May 17 00:33:55.876485 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:33:55.876502 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:33:55.876511 kernel: BIOS-provided physical RAM map:
May 17 00:33:55.876516 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:33:55.876522 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 17 00:33:55.876527 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 17 00:33:55.876533 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 17 00:33:55.876539 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 17 00:33:55.876544 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 17 00:33:55.876550 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 17 00:33:55.876556 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 17 00:33:55.876561 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
May 17 00:33:55.876566 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 17 00:33:55.876572 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 17 00:33:55.876579 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 17 00:33:55.876586 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 17 00:33:55.876591 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 17 00:33:55.876597 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:33:55.876603 kernel: NX (Execute Disable) protection: active
May 17 00:33:55.876609 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
May 17 00:33:55.876614 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
May 17 00:33:55.876620 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
May 17 00:33:55.876626 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
May 17 00:33:55.876631 kernel: extended physical RAM map:
May 17 00:33:55.876637 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:33:55.876644 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 17 00:33:55.876649 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 17 00:33:55.876655 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 17 00:33:55.876661 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 17 00:33:55.876667 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
May 17 00:33:55.876672 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 17 00:33:55.876678 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
May 17 00:33:55.876684 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
May 17 00:33:55.876690 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
May 17 00:33:55.876695 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
May 17 00:33:55.876701 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
May 17 00:33:55.876708 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
May 17 00:33:55.876713 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 17 00:33:55.876719 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 17 00:33:55.876725 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 17 00:33:55.876733 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 17 00:33:55.876740 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 17 00:33:55.876746 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:33:55.876753 kernel: efi: EFI v2.70 by EDK II
May 17 00:33:55.876759 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
May 17 00:33:55.876766 kernel: random: crng init done
May 17 00:33:55.876772 kernel: SMBIOS 2.8 present.
May 17 00:33:55.876778 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 17 00:33:55.876784 kernel: Hypervisor detected: KVM
May 17 00:33:55.876790 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:33:55.876797 kernel: kvm-clock: cpu 0, msr 2519a001, primary cpu clock
May 17 00:33:55.876803 kernel: kvm-clock: using sched offset of 4135381648 cycles
May 17 00:33:55.876811 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:33:55.876817 kernel: tsc: Detected 2794.746 MHz processor
May 17 00:33:55.876824 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:33:55.876830 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:33:55.876837 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 17 00:33:55.876843 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:33:55.876849 kernel: Using GB pages for direct mapping
May 17 00:33:55.876856 kernel: Secure boot disabled
May 17 00:33:55.876862 kernel: ACPI: Early table checksum verification disabled
May 17 00:33:55.876869 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 17 00:33:55.876876 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 17 00:33:55.876882 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:55.876888 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:55.876895 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 17 00:33:55.876901 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:55.876907 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:55.876914 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:55.876920 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:33:55.876927 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 17 00:33:55.876934 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 17 00:33:55.876940 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 17 00:33:55.876946 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 17 00:33:55.876953 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 17 00:33:55.876959 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 17 00:33:55.876965 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 17 00:33:55.876971 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 17 00:33:55.876978 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 17 00:33:55.876985 kernel: No NUMA configuration found
May 17 00:33:55.876992 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 17 00:33:55.876998 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 17 00:33:55.877004 kernel: Zone ranges:
May 17 00:33:55.877011 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:33:55.877017 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 17 00:33:55.877024 kernel: Normal empty
May 17 00:33:55.877030 kernel: Movable zone start for each node
May 17 00:33:55.877036 kernel: Early memory node ranges
May 17 00:33:55.877044 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:33:55.877050 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 17 00:33:55.877056 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 17 00:33:55.877063 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 17 00:33:55.877069 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 17 00:33:55.877075 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 17 00:33:55.877081 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 17 00:33:55.877088 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:33:55.877094 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:33:55.877100 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 17 00:33:55.877107 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:33:55.877114 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 17 00:33:55.877120 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 17 00:33:55.877126 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 17 00:33:55.877133 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:33:55.877139 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:33:55.877146 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:33:55.877152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:33:55.877158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:33:55.877166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:33:55.877172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:33:55.877178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:33:55.877185 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:33:55.877191 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:33:55.877197 kernel: TSC deadline timer available
May 17 00:33:55.877203 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 17 00:33:55.877210 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:33:55.877216 kernel: kvm-guest: setup PV sched yield
May 17 00:33:55.877232 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 17 00:33:55.877241 kernel: Booting paravirtualized kernel on KVM
May 17 00:33:55.877255 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:33:55.877294 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 17 00:33:55.877304 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 17 00:33:55.877313 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 17 00:33:55.877321 kernel: pcpu-alloc: [0] 0 1 2 3
May 17 00:33:55.877330 kernel: kvm-guest: setup async PF for cpu 0
May 17 00:33:55.877338 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
May 17 00:33:55.877345 kernel: kvm-guest: PV spinlocks enabled
May 17 00:33:55.877351 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:33:55.877358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 17 00:33:55.877367 kernel: Policy zone: DMA32
May 17 00:33:55.877375 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:33:55.877382 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:33:55.877388 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:33:55.877396 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:33:55.877403 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:33:55.877410 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 169308K reserved, 0K cma-reserved)
May 17 00:33:55.877417 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 17 00:33:55.877424 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:33:55.877431 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:33:55.877437 kernel: rcu: Hierarchical RCU implementation.
May 17 00:33:55.877444 kernel: rcu: RCU event tracing is enabled.
May 17 00:33:55.877451 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 17 00:33:55.877459 kernel: Rude variant of Tasks RCU enabled.
May 17 00:33:55.877466 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:33:55.877473 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:33:55.877479 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 17 00:33:55.877486 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 17 00:33:55.877493 kernel: Console: colour dummy device 80x25
May 17 00:33:55.877499 kernel: printk: console [ttyS0] enabled
May 17 00:33:55.877506 kernel: ACPI: Core revision 20210730
May 17 00:33:55.877513 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:33:55.877521 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:33:55.877527 kernel: x2apic enabled
May 17 00:33:55.877534 kernel: Switched APIC routing to physical x2apic.
May 17 00:33:55.877541 kernel: kvm-guest: setup PV IPIs
May 17 00:33:55.877548 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:33:55.877554 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:33:55.877561 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 17 00:33:55.877568 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:33:55.877575 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:33:55.877582 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:33:55.877589 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:33:55.877596 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:33:55.877603 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:33:55.877609 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:33:55.877616 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:33:55.877623 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:33:55.877630 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 17 00:33:55.877637 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:33:55.877645 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:33:55.877651 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:33:55.877658 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:33:55.877665 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:33:55.877672 kernel: Freeing SMP alternatives memory: 32K
May 17 00:33:55.877678 kernel: pid_max: default: 32768 minimum: 301
May 17 00:33:55.877685 kernel: LSM: Security Framework initializing
May 17 00:33:55.877691 kernel: SELinux: Initializing.
May 17 00:33:55.877698 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:33:55.877706 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:33:55.877713 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:33:55.877720 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:33:55.877726 kernel: ... version: 0
May 17 00:33:55.877733 kernel: ... bit width: 48
May 17 00:33:55.877740 kernel: ... generic registers: 6
May 17 00:33:55.877746 kernel: ... value mask: 0000ffffffffffff
May 17 00:33:55.877753 kernel: ... max period: 00007fffffffffff
May 17 00:33:55.877760 kernel: ... fixed-purpose events: 0
May 17 00:33:55.877767 kernel: ... event mask: 000000000000003f
May 17 00:33:55.877774 kernel: signal: max sigframe size: 1776
May 17 00:33:55.877780 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:33:55.877787 kernel: smp: Bringing up secondary CPUs ...
May 17 00:33:55.877794 kernel: x86: Booting SMP configuration:
May 17 00:33:55.877800 kernel: .... node #0, CPUs: #1
May 17 00:33:55.877807 kernel: kvm-clock: cpu 1, msr 2519a041, secondary cpu clock
May 17 00:33:55.877814 kernel: kvm-guest: setup async PF for cpu 1
May 17 00:33:55.877820 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
May 17 00:33:55.877828 kernel: #2
May 17 00:33:55.877835 kernel: kvm-clock: cpu 2, msr 2519a081, secondary cpu clock
May 17 00:33:55.877841 kernel: kvm-guest: setup async PF for cpu 2
May 17 00:33:55.877848 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
May 17 00:33:55.877855 kernel: #3
May 17 00:33:55.877861 kernel: kvm-clock: cpu 3, msr 2519a0c1, secondary cpu clock
May 17 00:33:55.877868 kernel: kvm-guest: setup async PF for cpu 3
May 17 00:33:55.877875 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
May 17 00:33:55.877881 kernel: smp: Brought up 1 node, 4 CPUs
May 17 00:33:55.877888 kernel: smpboot: Max logical packages: 1
May 17 00:33:55.877896 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 17 00:33:55.877902 kernel: devtmpfs: initialized
May 17 00:33:55.877909 kernel: x86/mm: Memory block size: 128MB
May 17 00:33:55.877916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 17 00:33:55.877923 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 17 00:33:55.877929 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 17 00:33:55.877936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 17 00:33:55.877943 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 17 00:33:55.877951 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:33:55.877957 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 17 00:33:55.877964 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:33:55.877971 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:33:55.877978 kernel: audit: initializing netlink subsys (disabled)
May 17 00:33:55.877984 kernel: audit: type=2000 audit(1747442035.817:1): state=initialized audit_enabled=0 res=1
May 17 00:33:55.877991 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:33:55.877998 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:33:55.878004 kernel: cpuidle: using governor menu
May 17 00:33:55.878012 kernel: ACPI: bus type PCI registered
May 17 00:33:55.878018 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:33:55.878025 kernel: dca service started, version 1.12.1
May 17 00:33:55.878032 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:33:55.878039 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 17 00:33:55.878045 kernel: PCI: Using configuration type 1 for base access
May 17 00:33:55.878052 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:33:55.878059 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:33:55.878066 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:33:55.878073 kernel: ACPI: Added _OSI(Module Device)
May 17 00:33:55.878080 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:33:55.878086 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:33:55.878093 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:33:55.878100 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:33:55.878107 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:33:55.878113 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:33:55.878120 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:33:55.878127 kernel: ACPI: Interpreter enabled
May 17 00:33:55.878134 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:33:55.878141 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:33:55.878148 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:33:55.878154 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:33:55.878161 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:33:55.878302 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:33:55.878378 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:33:55.878446 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:33:55.878458 kernel: PCI host bridge to bus 0000:00
May 17 00:33:55.878530 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:33:55.878592 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:33:55.878653 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:33:55.878712 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 17 00:33:55.878770 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:33:55.878829 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 17 00:33:55.878890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:33:55.878968 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:33:55.879049 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:33:55.879153 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 17 00:33:55.879232 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 17 00:33:55.880418 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 17 00:33:55.880588 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 17 00:33:55.880679 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:33:55.880772 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 17 00:33:55.880847 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 17 00:33:55.880915 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 17 00:33:55.880981 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 17 00:33:55.881057 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 17 00:33:55.881142 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 17 00:33:55.881214 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 17 00:33:55.881313 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 17 00:33:55.881422 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:33:55.881560 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 17 00:33:55.881652 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 17 00:33:55.881738 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 17 00:33:55.881811 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 17 00:33:55.881896 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:33:55.881965 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:33:55.882036 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:33:55.882113 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 17 00:33:55.882181 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 17 00:33:55.882313 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:33:55.882404 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 17 00:33:55.882415 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:33:55.882422 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:33:55.882429 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:33:55.882437 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:33:55.882444 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:33:55.882451 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:33:55.882458 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:33:55.882468 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:33:55.882475 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:33:55.882482 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:33:55.882489 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:33:55.882496 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:33:55.882503 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:33:55.882510 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:33:55.882517 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:33:55.882525 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:33:55.882534 kernel: iommu: Default domain type: Translated
May 17 00:33:55.882541 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:33:55.882607 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:33:55.882673 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:33:55.882739 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:33:55.882748 kernel: vgaarb: loaded
May 17 00:33:55.882755 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:33:55.882763 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:33:55.882772 kernel: PTP clock support registered
May 17 00:33:55.882779 kernel: Registered efivars operations
May 17 00:33:55.882787 kernel: PCI: Using ACPI for IRQ routing
May 17 00:33:55.882794 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:33:55.882801 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 17 00:33:55.882809 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 17 00:33:55.882816 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
May 17 00:33:55.882823 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
May 17 00:33:55.882830 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 17 00:33:55.882838 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 17 00:33:55.882846 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:33:55.882853 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:33:55.882860 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:33:55.882868 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:33:55.882875 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:33:55.882883 kernel: pnp: PnP ACPI init
May 17 00:33:55.882959 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:33:55.882972 kernel: pnp: PnP ACPI: found 6 devices
May 17 00:33:55.882979 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:33:55.882986 kernel: NET: Registered PF_INET protocol family
May 17 00:33:55.882994 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:33:55.883001 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:33:55.883009 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:33:55.883016 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:33:55.883023 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 17 00:33:55.883030 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:33:55.883039 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:33:55.883046 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:33:55.883053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:33:55.883060 kernel: NET: Registered PF_XDP protocol family
May 17 00:33:55.883163 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 17 00:33:55.883306 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 17 00:33:55.883415 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:33:55.883511 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:33:55.883619 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:33:55.883717 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 17 00:33:55.883813 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:33:55.883909 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 17 00:33:55.883925 kernel: PCI: CLS 0 bytes, default 64
May 17 00:33:55.883937 kernel: Initialise system trusted keyrings
May 17 00:33:55.883948 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:33:55.883959 kernel: Key type asymmetric registered
May 17 00:33:55.883969 kernel: Asymmetric key parser 'x509' registered
May 17 00:33:55.883984 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:33:55.883995 kernel: io scheduler mq-deadline registered
May 17 00:33:55.884022 kernel: io scheduler kyber registered
May 17 00:33:55.884036 kernel: io scheduler bfq registered
May 17 00:33:55.884046 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:33:55.884059 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:33:55.884070 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:33:55.884082 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 17 00:33:55.884092 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:33:55.884104 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:33:55.884114 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:33:55.884131 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:33:55.884142 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:33:55.884152 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:33:55.884293 kernel: rtc_cmos 00:04: RTC can wake from S4
May 17 00:33:55.884362 kernel: rtc_cmos 00:04: registered as rtc0
May 17 00:33:55.884425 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:33:55 UTC (1747442035)
May 17 00:33:55.884490 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:33:55.884500 kernel: efifb: probing for efifb
May 17 00:33:55.884507 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 17 00:33:55.884515 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 17 00:33:55.884522 kernel: efifb: scrolling: redraw
May 17 00:33:55.884530 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:33:55.884537 kernel: Console: switching to colour frame buffer device 160x50
May 17 00:33:55.884544 kernel: fb0: EFI VGA frame buffer device
May 17 00:33:55.884551 kernel: pstore: Registered efi as persistent store backend
May 17 00:33:55.884561 kernel: NET: Registered PF_INET6 protocol family
May 17 00:33:55.884568 kernel: Segment Routing with IPv6
May 17 00:33:55.884576 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:33:55.884586 kernel: NET: Registered PF_PACKET protocol family
May 17 00:33:55.884594 kernel: Key type dns_resolver registered
May 17 00:33:55.884602 kernel: IPI shorthand broadcast: enabled
May 17 00:33:55.884610 kernel: sched_clock: Marking stable (469130056, 162397102)->(651766994, -20239836)
May 17 00:33:55.884618 kernel: registered taskstats version 1
May 17 00:33:55.884625 kernel: Loading compiled-in X.509 certificates
May 17 00:33:55.884633 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:33:55.884640 kernel: Key type .fscrypt registered
May 17 00:33:55.884647 kernel: Key type fscrypt-provisioning registered
May 17 00:33:55.884655 kernel: pstore: Using crash dump compression: deflate
May 17 00:33:55.884662 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:33:55.884671 kernel: ima: Allocated hash algorithm: sha1
May 17 00:33:55.884679 kernel: ima: No architecture policies found
May 17 00:33:55.884686 kernel: clk: Disabling unused clocks
May 17 00:33:55.884694 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:33:55.884701 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:33:55.884709 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:33:55.884716 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:33:55.884723 kernel: Run /init as init process
May 17 00:33:55.884730 kernel: with arguments:
May 17 00:33:55.884739 kernel: /init
May 17 00:33:55.884746 kernel: with environment:
May 17 00:33:55.884753 kernel: HOME=/
May 17 00:33:55.884760 kernel: TERM=linux
May 17 00:33:55.884767 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:33:55.884777 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:33:55.884787 systemd[1]: Detected virtualization kvm.
May 17 00:33:55.884795 systemd[1]: Detected architecture x86-64.
May 17 00:33:55.884804 systemd[1]: Running in initrd.
May 17 00:33:55.884812 systemd[1]: No hostname configured, using default hostname.
May 17 00:33:55.884819 systemd[1]: Hostname set to .
May 17 00:33:55.884827 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:33:55.884835 systemd[1]: Queued start job for default target initrd.target.
May 17 00:33:55.884843 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:33:55.884850 systemd[1]: Reached target cryptsetup.target.
May 17 00:33:55.884858 systemd[1]: Reached target paths.target.
May 17 00:33:55.884865 systemd[1]: Reached target slices.target.
May 17 00:33:55.884875 systemd[1]: Reached target swap.target. May 17 00:33:55.884882 systemd[1]: Reached target timers.target. May 17 00:33:55.884891 systemd[1]: Listening on iscsid.socket. May 17 00:33:55.884898 systemd[1]: Listening on iscsiuio.socket. May 17 00:33:55.884906 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:33:55.884914 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:33:55.884921 systemd[1]: Listening on systemd-journald.socket. May 17 00:33:55.884931 systemd[1]: Listening on systemd-networkd.socket. May 17 00:33:55.884939 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:33:55.884947 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:33:55.884954 systemd[1]: Reached target sockets.target. May 17 00:33:55.884962 systemd[1]: Starting kmod-static-nodes.service... May 17 00:33:55.884970 systemd[1]: Finished network-cleanup.service. May 17 00:33:55.884978 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:33:55.884986 systemd[1]: Starting systemd-journald.service... May 17 00:33:55.884993 systemd[1]: Starting systemd-modules-load.service... May 17 00:33:55.885005 systemd[1]: Starting systemd-resolved.service... May 17 00:33:55.885022 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:33:55.885035 systemd[1]: Finished kmod-static-nodes.service. May 17 00:33:55.885046 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:33:55.885057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:33:55.885075 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:33:55.885087 kernel: audit: type=1130 audit(1747442035.875:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.885098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 17 00:33:55.885109 kernel: audit: type=1130 audit(1747442035.881:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.885129 systemd-journald[199]: Journal started May 17 00:33:55.885191 systemd-journald[199]: Runtime Journal (/run/log/journal/62c3e149b67e4e4093787dc8df449cf9) is 6.0M, max 48.4M, 42.4M free. May 17 00:33:55.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.875415 systemd-modules-load[200]: Inserted module 'overlay' May 17 00:33:55.888395 systemd[1]: Started systemd-journald.service. May 17 00:33:55.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.892294 kernel: audit: type=1130 audit(1747442035.888:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.892655 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:33:55.901996 systemd-resolved[201]: Positive Trust Anchors: May 17 00:33:55.902560 systemd-resolved[201]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:33:55.902725 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:33:55.904986 systemd-resolved[201]: Defaulting to hostname 'linux'. May 17 00:33:55.905966 systemd[1]: Started systemd-resolved.service. May 17 00:33:55.906312 systemd[1]: Reached target nss-lookup.target. May 17 00:33:55.910437 kernel: audit: type=1130 audit(1747442035.905:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.920294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:33:55.920324 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:33:55.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.922913 systemd[1]: Starting dracut-cmdline.service... May 17 00:33:55.926763 kernel: audit: type=1130 audit(1747442035.921:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:33:55.927730 systemd-modules-load[200]: Inserted module 'br_netfilter' May 17 00:33:55.928731 kernel: Bridge firewalling registered May 17 00:33:55.933850 dracut-cmdline[219]: dracut-dracut-053 May 17 00:33:55.936098 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:33:55.945289 kernel: SCSI subsystem initialized May 17 00:33:55.957464 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:33:55.957494 kernel: device-mapper: uevent: version 1.0.3 May 17 00:33:55.957505 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:33:55.960317 systemd-modules-load[200]: Inserted module 'dm_multipath' May 17 00:33:55.961313 systemd[1]: Finished systemd-modules-load.service. May 17 00:33:55.964030 systemd[1]: Starting systemd-sysctl.service... May 17 00:33:55.968657 kernel: audit: type=1130 audit(1747442035.962:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:55.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:55.975448 systemd[1]: Finished systemd-sysctl.service. May 17 00:33:55.980291 kernel: audit: type=1130 audit(1747442035.976:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.010299 kernel: Loading iSCSI transport class v2.0-870. May 17 00:33:56.032299 kernel: iscsi: registered transport (tcp) May 17 00:33:56.057607 kernel: iscsi: registered transport (qla4xxx) May 17 00:33:56.057668 kernel: QLogic iSCSI HBA Driver May 17 00:33:56.087141 systemd[1]: Finished dracut-cmdline.service. May 17 00:33:56.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.089585 systemd[1]: Starting dracut-pre-udev.service... May 17 00:33:56.093688 kernel: audit: type=1130 audit(1747442036.088:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:56.136293 kernel: raid6: avx2x4 gen() 25393 MB/s May 17 00:33:56.166291 kernel: raid6: avx2x4 xor() 7264 MB/s May 17 00:33:56.183302 kernel: raid6: avx2x2 gen() 30124 MB/s May 17 00:33:56.200292 kernel: raid6: avx2x2 xor() 19154 MB/s May 17 00:33:56.217290 kernel: raid6: avx2x1 gen() 24996 MB/s May 17 00:33:56.234292 kernel: raid6: avx2x1 xor() 15200 MB/s May 17 00:33:56.251291 kernel: raid6: sse2x4 gen() 14185 MB/s May 17 00:33:56.268302 kernel: raid6: sse2x4 xor() 7030 MB/s May 17 00:33:56.285297 kernel: raid6: sse2x2 gen() 15669 MB/s May 17 00:33:56.302296 kernel: raid6: sse2x2 xor() 9783 MB/s May 17 00:33:56.319291 kernel: raid6: sse2x1 gen() 12168 MB/s May 17 00:33:56.336706 kernel: raid6: sse2x1 xor() 7688 MB/s May 17 00:33:56.336726 kernel: raid6: using algorithm avx2x2 gen() 30124 MB/s May 17 00:33:56.336736 kernel: raid6: .... xor() 19154 MB/s, rmw enabled May 17 00:33:56.337439 kernel: raid6: using avx2x2 recovery algorithm May 17 00:33:56.349290 kernel: xor: automatically using best checksumming function avx May 17 00:33:56.438296 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:33:56.446610 systemd[1]: Finished dracut-pre-udev.service. May 17 00:33:56.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.450000 audit: BPF prog-id=7 op=LOAD May 17 00:33:56.450000 audit: BPF prog-id=8 op=LOAD May 17 00:33:56.451296 kernel: audit: type=1130 audit(1747442036.447:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.451476 systemd[1]: Starting systemd-udevd.service... May 17 00:33:56.462994 systemd-udevd[403]: Using default interface naming scheme 'v252'. 
May 17 00:33:56.466455 systemd[1]: Started systemd-udevd.service. May 17 00:33:56.467963 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:33:56.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.478845 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation May 17 00:33:56.498894 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:33:56.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.509038 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:33:56.540330 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:33:56.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:56.576292 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 00:33:56.581950 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:33:56.581966 kernel: GPT:9289727 != 19775487 May 17 00:33:56.581975 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:33:56.581984 kernel: GPT:9289727 != 19775487 May 17 00:33:56.581992 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:33:56.582001 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:33:56.585302 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:33:56.590294 kernel: libata version 3.00 loaded. 
May 17 00:33:56.601483 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:33:56.629817 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:33:56.629838 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:33:56.629953 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:33:56.629966 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:33:56.630064 kernel: AES CTR mode by8 optimization enabled May 17 00:33:56.630076 kernel: scsi host0: ahci May 17 00:33:56.630192 kernel: scsi host1: ahci May 17 00:33:56.630332 kernel: scsi host2: ahci May 17 00:33:56.630467 kernel: scsi host3: ahci May 17 00:33:56.632174 kernel: scsi host4: ahci May 17 00:33:56.632290 kernel: scsi host5: ahci May 17 00:33:56.632376 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 17 00:33:56.632386 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 17 00:33:56.632398 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 17 00:33:56.632406 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 17 00:33:56.632415 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 17 00:33:56.632423 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 17 00:33:56.630416 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:33:56.639363 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464) May 17 00:33:56.641660 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:33:56.646143 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:33:56.648323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:33:56.658749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:33:56.660387 systemd[1]: Starting disk-uuid.service... 
May 17 00:33:56.751668 disk-uuid[525]: Primary Header is updated. May 17 00:33:56.751668 disk-uuid[525]: Secondary Entries is updated. May 17 00:33:56.751668 disk-uuid[525]: Secondary Header is updated. May 17 00:33:56.755155 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:33:56.938899 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:33:56.938968 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:33:56.938978 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:33:56.940302 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:33:56.941295 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 00:33:56.942743 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 00:33:56.942754 kernel: ata3.00: applying bridge limits May 17 00:33:56.944313 kernel: ata3.00: configured for UDMA/100 May 17 00:33:56.946319 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:33:56.948295 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:33:56.981381 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 00:33:56.998962 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:33:56.998977 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 17 00:33:57.761296 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:33:57.761679 disk-uuid[526]: The operation has completed successfully. May 17 00:33:57.782887 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:33:57.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.782971 systemd[1]: Finished disk-uuid.service. 
May 17 00:33:57.790056 systemd[1]: Starting verity-setup.service... May 17 00:33:57.804312 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:33:57.824078 systemd[1]: Found device dev-mapper-usr.device. May 17 00:33:57.826409 systemd[1]: Mounting sysusr-usr.mount... May 17 00:33:57.829638 systemd[1]: Finished verity-setup.service. May 17 00:33:57.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.885297 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:33:57.885818 systemd[1]: Mounted sysusr-usr.mount. May 17 00:33:57.886238 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:33:57.887318 systemd[1]: Starting ignition-setup.service... May 17 00:33:57.889822 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:33:57.900301 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:33:57.900341 kernel: BTRFS info (device vda6): using free space tree May 17 00:33:57.900355 kernel: BTRFS info (device vda6): has skinny extents May 17 00:33:57.908136 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:33:57.915972 systemd[1]: Finished ignition-setup.service. May 17 00:33:57.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.917364 systemd[1]: Starting ignition-fetch-offline.service... 
May 17 00:33:57.956807 ignition[644]: Ignition 2.14.0 May 17 00:33:57.956820 ignition[644]: Stage: fetch-offline May 17 00:33:57.956912 ignition[644]: no configs at "/usr/lib/ignition/base.d" May 17 00:33:57.956924 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:33:57.960109 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:33:57.957035 ignition[644]: parsed url from cmdline: "" May 17 00:33:57.957039 ignition[644]: no config URL provided May 17 00:33:57.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:57.963000 audit: BPF prog-id=9 op=LOAD May 17 00:33:57.957044 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:33:57.963871 systemd[1]: Starting systemd-networkd.service... May 17 00:33:57.957053 ignition[644]: no config at "/usr/lib/ignition/user.ign" May 17 00:33:57.957072 ignition[644]: op(1): [started] loading QEMU firmware config module May 17 00:33:57.957077 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 00:33:57.965077 ignition[644]: op(1): [finished] loading QEMU firmware config module May 17 00:33:58.000029 systemd-networkd[722]: lo: Link UP May 17 00:33:58.000040 systemd-networkd[722]: lo: Gained carrier May 17 00:33:58.000465 systemd-networkd[722]: Enumeration completed May 17 00:33:58.000563 systemd[1]: Started systemd-networkd.service. May 17 00:33:58.001715 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:33:58.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:58.003393 systemd-networkd[722]: eth0: Link UP May 17 00:33:58.003397 systemd-networkd[722]: eth0: Gained carrier May 17 00:33:58.004308 systemd[1]: Reached target network.target. May 17 00:33:58.011723 systemd[1]: Starting iscsiuio.service... May 17 00:33:58.015744 systemd[1]: Started iscsiuio.service. May 17 00:33:58.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:58.018192 systemd[1]: Starting iscsid.service... May 17 00:33:58.021153 iscsid[727]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:33:58.021153 iscsid[727]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 17 00:33:58.021153 iscsid[727]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:33:58.021153 iscsid[727]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:33:58.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:58.030399 ignition[644]: parsing config with SHA512: 3047726d3fc27ea76da0ae7f9fe0d175ea034769202b8bde18298a3a4a2911e9847f8503e56ca2fb9463ee0fb04175d4940d69cb7921c46f5c300f9b0e7f624e May 17 00:33:58.035682 iscsid[727]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:33:58.035682 iscsid[727]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:33:58.022598 systemd[1]: Started iscsid.service. May 17 00:33:58.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:58.040283 ignition[644]: fetch-offline: fetch-offline passed May 17 00:33:58.024700 systemd[1]: Starting dracut-initqueue.service... May 17 00:33:58.040347 ignition[644]: Ignition finished successfully May 17 00:33:58.037699 systemd[1]: Finished dracut-initqueue.service. May 17 00:33:58.039012 unknown[644]: fetched base config from "system" May 17 00:33:58.039021 unknown[644]: fetched user config from "qemu" May 17 00:33:58.040671 systemd[1]: Reached target remote-fs-pre.target. May 17 00:33:58.041586 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:33:58.042544 systemd[1]: Reached target remote-fs.target. May 17 00:33:58.044618 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:33:58.053069 systemd[1]: Starting dracut-pre-mount.service... May 17 00:33:58.055014 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:33:58.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:58.057024 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:33:58.059514 systemd[1]: Starting ignition-kargs.service... May 17 00:33:58.061626 systemd[1]: Finished dracut-pre-mount.service. May 17 00:33:58.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:58.068426 ignition[740]: Ignition 2.14.0 May 17 00:33:58.069393 ignition[740]: Stage: kargs May 17 00:33:58.069487 ignition[740]: no configs at "/usr/lib/ignition/base.d" May 17 00:33:58.070026 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:33:58.073236 ignition[740]: kargs: kargs passed May 17 00:33:58.073982 ignition[740]: Ignition finished successfully May 17 00:33:58.076025 systemd[1]: Finished ignition-kargs.service. May 17 00:33:58.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:58.078663 systemd[1]: Starting ignition-disks.service... May 17 00:33:58.087884 ignition[748]: Ignition 2.14.0 May 17 00:33:58.087893 ignition[748]: Stage: disks May 17 00:33:58.087987 ignition[748]: no configs at "/usr/lib/ignition/base.d" May 17 00:33:58.087995 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:33:58.091919 ignition[748]: disks: disks passed May 17 00:33:58.091962 ignition[748]: Ignition finished successfully May 17 00:33:58.094033 systemd[1]: Finished ignition-disks.service. May 17 00:33:58.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:33:58.094620 systemd[1]: Reached target initrd-root-device.target. May 17 00:33:58.096118 systemd[1]: Reached target local-fs-pre.target. May 17 00:33:58.097827 systemd[1]: Reached target local-fs.target. May 17 00:33:58.100387 systemd[1]: Reached target sysinit.target. May 17 00:33:58.100614 systemd[1]: Reached target basic.target. May 17 00:33:58.103249 systemd[1]: Starting systemd-fsck-root.service... May 17 00:33:58.113909 systemd-fsck[757]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 17 00:33:58.129494 systemd[1]: Finished systemd-fsck-root.service. May 17 00:33:58.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:58.130802 systemd[1]: Mounting sysroot.mount... May 17 00:33:58.153297 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:33:58.153439 systemd[1]: Mounted sysroot.mount. May 17 00:33:58.153759 systemd[1]: Reached target initrd-root-fs.target. May 17 00:33:58.155838 systemd[1]: Mounting sysroot-usr.mount... May 17 00:33:58.156804 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 17 00:33:58.156832 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:33:58.156851 systemd[1]: Reached target ignition-diskful.target. May 17 00:33:58.158438 systemd[1]: Mounted sysroot-usr.mount. May 17 00:33:58.160654 systemd[1]: Starting initrd-setup-root.service... 
May 17 00:33:58.166834 initrd-setup-root[767]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:33:58.172058 initrd-setup-root[775]: cut: /sysroot/etc/group: No such file or directory May 17 00:33:58.174526 initrd-setup-root[783]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:33:58.177985 initrd-setup-root[791]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:33:58.200675 systemd[1]: Finished initrd-setup-root.service. May 17 00:33:58.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:58.202954 systemd[1]: Starting ignition-mount.service... May 17 00:33:58.205087 systemd[1]: Starting sysroot-boot.service... May 17 00:33:58.207415 bash[808]: umount: /sysroot/usr/share/oem: not mounted. May 17 00:33:58.214651 ignition[809]: INFO : Ignition 2.14.0 May 17 00:33:58.214651 ignition[809]: INFO : Stage: mount May 17 00:33:58.216293 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:33:58.216293 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:33:58.216293 ignition[809]: INFO : mount: mount passed May 17 00:33:58.216293 ignition[809]: INFO : Ignition finished successfully May 17 00:33:58.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:58.216993 systemd[1]: Finished ignition-mount.service. May 17 00:33:58.222529 systemd[1]: Finished sysroot-boot.service. May 17 00:33:58.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:33:58.836220 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
May 17 00:33:58.844976 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (818)
May 17 00:33:58.845002 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:33:58.845011 kernel: BTRFS info (device vda6): using free space tree
May 17 00:33:58.845789 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:33:58.849447 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:33:58.851121 systemd[1]: Starting ignition-files.service...
May 17 00:33:58.864897 ignition[838]: INFO : Ignition 2.14.0
May 17 00:33:58.864897 ignition[838]: INFO : Stage: files
May 17 00:33:58.866718 ignition[838]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:33:58.866718 ignition[838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:33:58.866718 ignition[838]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:33:58.870651 ignition[838]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:33:58.870651 ignition[838]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:33:58.870651 ignition[838]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:33:58.870651 ignition[838]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:33:58.870651 ignition[838]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:33:58.870460 unknown[838]: wrote ssh authorized keys file for user: core
May 17 00:33:58.878991 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:33:58.878991 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:33:58.878991 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:33:58.878991 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:33:58.911485 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:33:59.098566 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:33:59.098566 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:33:59.103489 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:33:59.103489 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:33:59.103489 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:33:59.103489 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:33:59.103489 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:33:59.103489 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:33:59.103489 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:33:59.103489 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:33:59.117568 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:33:59.117568 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:33:59.117568 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:33:59.117568 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:33:59.117568 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:33:59.668341 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 00:34:00.044988 systemd-networkd[722]: eth0: Gained IPv6LL
May 17 00:34:00.330369 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:34:00.330369 ignition[838]: INFO : files: op(c): [started] processing unit "containerd.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(c): [finished] processing unit "containerd.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 17 00:34:00.334671 ignition[838]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:34:00.384670 ignition[838]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:34:00.386526 ignition[838]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 17 00:34:00.388148 ignition[838]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:34:00.390080 ignition[838]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:34:00.391905 ignition[838]: INFO : files: files passed
May 17 00:34:00.392696 ignition[838]: INFO : Ignition finished successfully
May 17 00:34:00.394810 systemd[1]: Finished ignition-files.service.
May 17 00:34:00.400832 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 17 00:34:00.400855 kernel: audit: type=1130 audit(1747442040.394:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.400884 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:34:00.401614 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:34:00.402351 systemd[1]: Starting ignition-quench.service...
May 17 00:34:00.415109 kernel: audit: type=1130 audit(1747442040.406:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.415154 kernel: audit: type=1131 audit(1747442040.406:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.406075 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:34:00.406187 systemd[1]: Finished ignition-quench.service.
May 17 00:34:00.421436 initrd-setup-root-after-ignition[863]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 17 00:34:00.424570 initrd-setup-root-after-ignition[865]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:34:00.426621 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:34:00.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.427143 systemd[1]: Reached target ignition-complete.target.
May 17 00:34:00.443193 kernel: audit: type=1130 audit(1747442040.426:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.441484 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:34:00.455321 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:34:00.455421 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:34:00.463951 kernel: audit: type=1130 audit(1747442040.456:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.463969 kernel: audit: type=1131 audit(1747442040.456:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.457252 systemd[1]: Reached target initrd-fs.target.
May 17 00:34:00.465488 systemd[1]: Reached target initrd.target.
May 17 00:34:00.465915 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:34:00.467015 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:34:00.478646 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:34:00.483293 kernel: audit: type=1130 audit(1747442040.478:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.483315 systemd[1]: Starting initrd-cleanup.service...
May 17 00:34:00.492214 systemd[1]: Stopped target nss-lookup.target.
May 17 00:34:00.492685 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:34:00.493019 systemd[1]: Stopped target timers.target.
May 17 00:34:00.495933 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:34:00.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.496018 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:34:00.502869 kernel: audit: type=1131 audit(1747442040.497:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.497439 systemd[1]: Stopped target initrd.target.
May 17 00:34:00.500925 systemd[1]: Stopped target basic.target.
May 17 00:34:00.503173 systemd[1]: Stopped target ignition-complete.target.
May 17 00:34:00.504910 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:34:00.506630 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:34:00.508148 systemd[1]: Stopped target remote-fs.target.
May 17 00:34:00.509810 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:34:00.511600 systemd[1]: Stopped target sysinit.target.
May 17 00:34:00.512974 systemd[1]: Stopped target local-fs.target.
May 17 00:34:00.515574 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:34:00.516179 systemd[1]: Stopped target swap.target.
May 17 00:34:00.520022 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:34:00.525656 kernel: audit: type=1131 audit(1747442040.521:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.520148 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:34:00.521744 systemd[1]: Stopped target cryptsetup.target.
May 17 00:34:00.531662 kernel: audit: type=1131 audit(1747442040.527:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.526150 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:34:00.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.526262 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:34:00.527789 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:34:00.527894 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:34:00.532248 systemd[1]: Stopped target paths.target.
May 17 00:34:00.533940 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:34:00.538344 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:34:00.539697 systemd[1]: Stopped target slices.target.
May 17 00:34:00.541288 systemd[1]: Stopped target sockets.target.
May 17 00:34:00.542874 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:34:00.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.542992 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:34:00.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.544668 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:34:00.550064 iscsid[727]: iscsid shutting down.
May 17 00:34:00.544753 systemd[1]: Stopped ignition-files.service.
May 17 00:34:00.547404 systemd[1]: Stopping ignition-mount.service...
May 17 00:34:00.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.548579 systemd[1]: Stopping iscsid.service...
May 17 00:34:00.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.550772 systemd[1]: Stopping sysroot-boot.service...
May 17 00:34:00.551575 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:34:00.558152 ignition[878]: INFO : Ignition 2.14.0
May 17 00:34:00.558152 ignition[878]: INFO : Stage: umount
May 17 00:34:00.558152 ignition[878]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:34:00.558152 ignition[878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:34:00.558152 ignition[878]: INFO : umount: umount passed
May 17 00:34:00.558152 ignition[878]: INFO : Ignition finished successfully
May 17 00:34:00.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.551712 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:34:00.553402 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:34:00.553510 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:34:00.556784 systemd[1]: iscsid.service: Deactivated successfully.
May 17 00:34:00.556874 systemd[1]: Stopped iscsid.service.
May 17 00:34:00.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.558591 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:34:00.558660 systemd[1]: Stopped ignition-mount.service.
May 17 00:34:00.560887 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:34:00.560955 systemd[1]: Finished initrd-cleanup.service.
May 17 00:34:00.563781 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:34:00.563806 systemd[1]: Closed iscsid.socket.
May 17 00:34:00.564994 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:34:00.565024 systemd[1]: Stopped ignition-disks.service.
May 17 00:34:00.580428 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:34:00.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.580612 systemd[1]: Stopped ignition-kargs.service.
May 17 00:34:00.582342 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:34:00.582372 systemd[1]: Stopped ignition-setup.service.
May 17 00:34:00.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.583215 systemd[1]: Stopping iscsiuio.service...
May 17 00:34:00.586688 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:34:00.587049 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 00:34:00.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.587135 systemd[1]: Stopped iscsiuio.service.
May 17 00:34:00.588186 systemd[1]: Stopped target network.target.
May 17 00:34:00.589608 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:34:00.589637 systemd[1]: Closed iscsiuio.socket.
May 17 00:34:00.590157 systemd[1]: Stopping systemd-networkd.service...
May 17 00:34:00.617000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:34:00.590309 systemd[1]: Stopping systemd-resolved.service...
May 17 00:34:00.596375 systemd-networkd[722]: eth0: DHCPv6 lease lost
May 17 00:34:00.647000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:34:00.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.597904 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:34:00.597985 systemd[1]: Stopped systemd-networkd.service.
May 17 00:34:00.600862 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:34:00.600887 systemd[1]: Closed systemd-networkd.socket.
May 17 00:34:00.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.602922 systemd[1]: Stopping network-cleanup.service...
May 17 00:34:00.604389 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:34:00.604429 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:34:00.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.605562 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:34:00.605592 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:34:00.607676 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:34:00.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.607706 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:34:00.608848 systemd[1]: Stopping systemd-udevd.service...
May 17 00:34:00.610441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:34:00.610779 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:34:00.610855 systemd[1]: Stopped systemd-resolved.service.
May 17 00:34:00.618404 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:34:00.618507 systemd[1]: Stopped network-cleanup.service.
May 17 00:34:00.647586 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:34:00.647687 systemd[1]: Stopped systemd-udevd.service.
May 17 00:34:00.650585 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:34:00.650618 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:34:00.652659 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:34:00.652692 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:34:00.653633 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:34:00.653678 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:34:00.655453 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:34:00.655483 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:34:00.657025 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:34:00.657060 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:34:00.671809 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:34:00.672845 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:34:00.672883 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 17 00:34:00.674881 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:34:00.674912 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:34:00.676673 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:34:00.676704 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:34:00.678349 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 17 00:34:00.678672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:34:00.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.678736 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:34:00.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:00.714580 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:34:00.714651 systemd[1]: Stopped sysroot-boot.service.
May 17 00:34:00.716142 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:34:00.717986 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:34:00.718022 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:34:00.719698 systemd[1]: Starting initrd-switch-root.service...
May 17 00:34:00.725148 systemd[1]: Switching root.
May 17 00:34:00.727000 audit: BPF prog-id=8 op=UNLOAD
May 17 00:34:00.727000 audit: BPF prog-id=7 op=UNLOAD
May 17 00:34:00.727000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:34:00.727000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:34:00.727000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:34:00.744009 systemd-journald[199]: Journal stopped
May 17 00:34:04.708342 systemd-journald[199]: Received SIGTERM from PID 1 (systemd).
May 17 00:34:04.708394 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:34:04.708406 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:34:04.708416 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:34:04.708426 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:34:04.708441 kernel: SELinux: policy capability open_perms=1
May 17 00:34:04.708451 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:34:04.708480 kernel: SELinux: policy capability always_check_network=0
May 17 00:34:04.708489 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:34:04.708498 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:34:04.708507 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:34:04.708516 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:34:04.708526 systemd[1]: Successfully loaded SELinux policy in 102.235ms.
May 17 00:34:04.708542 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.133ms.
May 17 00:34:04.708559 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:34:04.708569 systemd[1]: Detected virtualization kvm.
May 17 00:34:04.708579 systemd[1]: Detected architecture x86-64.
May 17 00:34:04.708589 systemd[1]: Detected first boot.
May 17 00:34:04.708599 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:34:04.708608 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 00:34:04.708624 systemd[1]: Populated /etc with preset unit settings.
May 17 00:34:04.708635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:34:04.708649 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:34:04.708660 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:34:04.708674 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:34:04.708684 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 17 00:34:04.708694 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 00:34:04.708704 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 00:34:04.708720 systemd[1]: Created slice system-getty.slice.
May 17 00:34:04.708731 systemd[1]: Created slice system-modprobe.slice.
May 17 00:34:04.708744 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:34:04.708754 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:34:04.708764 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:34:04.708774 systemd[1]: Created slice user.slice. May 17 00:34:04.708784 systemd[1]: Started systemd-ask-password-console.path. May 17 00:34:04.708794 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:34:04.708804 systemd[1]: Set up automount boot.automount. May 17 00:34:04.708819 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:34:04.708829 systemd[1]: Reached target integritysetup.target. May 17 00:34:04.708840 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:34:04.708850 systemd[1]: Reached target remote-fs.target. May 17 00:34:04.708860 systemd[1]: Reached target slices.target. May 17 00:34:04.708870 systemd[1]: Reached target swap.target. May 17 00:34:04.708880 systemd[1]: Reached target torcx.target. May 17 00:34:04.708890 systemd[1]: Reached target veritysetup.target. May 17 00:34:04.708906 systemd[1]: Listening on systemd-coredump.socket. May 17 00:34:04.708916 systemd[1]: Listening on systemd-initctl.socket. May 17 00:34:04.708926 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:34:04.708939 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:34:04.708949 systemd[1]: Listening on systemd-journald.socket. May 17 00:34:04.708959 systemd[1]: Listening on systemd-networkd.socket. May 17 00:34:04.708969 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:34:04.708978 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:34:04.708988 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:34:04.708998 systemd[1]: Mounting dev-hugepages.mount... May 17 00:34:04.709017 systemd[1]: Mounting dev-mqueue.mount... May 17 00:34:04.709034 systemd[1]: Mounting media.mount... 
May 17 00:34:04.709044 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:04.709053 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:34:04.709063 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:34:04.709074 systemd[1]: Mounting tmp.mount... May 17 00:34:04.709084 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:34:04.709094 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:34:04.709104 systemd[1]: Starting kmod-static-nodes.service... May 17 00:34:04.709121 systemd[1]: Starting modprobe@configfs.service... May 17 00:34:04.709131 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:34:04.709141 systemd[1]: Starting modprobe@drm.service... May 17 00:34:04.709151 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:34:04.709161 systemd[1]: Starting modprobe@fuse.service... May 17 00:34:04.709170 systemd[1]: Starting modprobe@loop.service... May 17 00:34:04.709180 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:34:04.709190 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:34:04.709205 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 17 00:34:04.709216 systemd[1]: Starting systemd-journald.service... May 17 00:34:04.709225 kernel: fuse: init (API version 7.34) May 17 00:34:04.709235 systemd[1]: Starting systemd-modules-load.service... May 17 00:34:04.709244 systemd[1]: Starting systemd-network-generator.service... May 17 00:34:04.709254 systemd[1]: Starting systemd-remount-fs.service... May 17 00:34:04.709264 systemd[1]: Starting systemd-udev-trigger.service... 
May 17 00:34:04.709286 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:04.709296 kernel: loop: module loaded May 17 00:34:04.709306 systemd[1]: Mounted dev-hugepages.mount. May 17 00:34:04.709326 systemd[1]: Mounted dev-mqueue.mount. May 17 00:34:04.709338 systemd-journald[1017]: Journal started May 17 00:34:04.709376 systemd-journald[1017]: Runtime Journal (/run/log/journal/62c3e149b67e4e4093787dc8df449cf9) is 6.0M, max 48.4M, 42.4M free. May 17 00:34:04.621000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:34:04.621000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:34:04.707000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:34:04.707000 audit[1017]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc03b0ae0 a2=4000 a3=7ffdc03b0b7c items=0 ppid=1 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:04.707000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:34:04.710324 systemd[1]: Mounted media.mount. May 17 00:34:04.712319 systemd[1]: Started systemd-journald.service. May 17 00:34:04.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.714960 systemd[1]: Mounted sys-kernel-debug.mount. 
May 17 00:34:04.716051 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:34:04.717086 systemd[1]: Mounted tmp.mount. May 17 00:34:04.718204 systemd[1]: Finished kmod-static-nodes.service. May 17 00:34:04.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.719462 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:34:04.719615 systemd[1]: Finished modprobe@configfs.service. May 17 00:34:04.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.720774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:34:04.720894 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:34:04.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.722213 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:34:04.722406 systemd[1]: Finished modprobe@drm.service. 
May 17 00:34:04.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.723574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:34:04.723764 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:34:04.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.725311 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:34:04.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.726482 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:34:04.726734 systemd[1]: Finished modprobe@fuse.service. May 17 00:34:04.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:04.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.728200 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:34:04.728397 systemd[1]: Finished modprobe@loop.service. May 17 00:34:04.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.729589 systemd[1]: Finished systemd-modules-load.service. May 17 00:34:04.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.730958 systemd[1]: Finished systemd-network-generator.service. May 17 00:34:04.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.732239 systemd[1]: Finished systemd-remount-fs.service. May 17 00:34:04.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.733505 systemd[1]: Reached target network-pre.target. May 17 00:34:04.735797 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
May 17 00:34:04.737857 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:34:04.738722 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:34:04.740566 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:34:04.742791 systemd[1]: Starting systemd-journal-flush.service... May 17 00:34:04.744002 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:34:04.745105 systemd[1]: Starting systemd-random-seed.service... May 17 00:34:04.746504 systemd-journald[1017]: Time spent on flushing to /var/log/journal/62c3e149b67e4e4093787dc8df449cf9 is 64.318ms for 1100 entries. May 17 00:34:04.746504 systemd-journald[1017]: System Journal (/var/log/journal/62c3e149b67e4e4093787dc8df449cf9) is 8.0M, max 195.6M, 187.6M free. May 17 00:34:04.824634 systemd-journald[1017]: Received client request to flush runtime journal. May 17 00:34:04.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:04.746129 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:34:04.749741 systemd[1]: Starting systemd-sysctl.service... May 17 00:34:04.751798 systemd[1]: Starting systemd-sysusers.service... May 17 00:34:04.754911 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:34:04.756144 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:34:04.801505 systemd[1]: Finished systemd-random-seed.service. May 17 00:34:04.817439 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:34:04.818637 systemd[1]: Finished systemd-sysusers.service. May 17 00:34:04.819914 systemd[1]: Finished systemd-sysctl.service. May 17 00:34:04.820837 systemd[1]: Reached target first-boot-complete.target. May 17 00:34:04.823669 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:34:04.825514 systemd[1]: Starting systemd-udev-settle.service... May 17 00:34:04.828038 systemd[1]: Finished systemd-journal-flush.service. May 17 00:34:04.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:04.832390 udevadm[1069]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:34:04.840229 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:34:04.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.503514 systemd[1]: Finished systemd-hwdb-update.service. 
May 17 00:34:05.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.505409 kernel: kauditd_printk_skb: 77 callbacks suppressed May 17 00:34:05.505473 kernel: audit: type=1130 audit(1747442045.504:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.505831 systemd[1]: Starting systemd-udevd.service... May 17 00:34:05.523295 systemd-udevd[1072]: Using default interface naming scheme 'v252'. May 17 00:34:05.535773 systemd[1]: Started systemd-udevd.service. May 17 00:34:05.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.539158 systemd[1]: Starting systemd-networkd.service... May 17 00:34:05.540335 kernel: audit: type=1130 audit(1747442045.536:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.546593 systemd[1]: Starting systemd-userdbd.service... May 17 00:34:05.571285 systemd[1]: Found device dev-ttyS0.device. May 17 00:34:05.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.595169 systemd[1]: Started systemd-userdbd.service. May 17 00:34:05.607757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 17 00:34:05.612377 kernel: audit: type=1130 audit(1747442045.596:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.628000 audit[1096]: AVC avc: denied { confidentiality } for pid=1096 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:34:05.642286 kernel: audit: type=1400 audit(1747442045.628:115): avc: denied { confidentiality } for pid=1096 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:34:05.628000 audit[1096]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56534f62d770 a1=338ac a2=7fb2f9363bc5 a3=5 items=110 ppid=1072 pid=1096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:05.628000 audit: CWD cwd="/" May 17 00:34:05.628000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=1 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=2 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=3 name=(null) inode=14917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=4 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=5 name=(null) inode=14918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=6 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=7 name=(null) inode=14919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=8 name=(null) inode=14919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=9 name=(null) inode=14920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=10 name=(null) inode=14919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=11 name=(null) inode=14921 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=12 name=(null) inode=14919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=13 name=(null) inode=14922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.658288 kernel: audit: type=1300 audit(1747442045.628:115): arch=c000003e syscall=175 success=yes exit=0 a0=56534f62d770 a1=338ac a2=7fb2f9363bc5 a3=5 items=110 ppid=1072 pid=1096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:05.658320 kernel: audit: type=1307 audit(1747442045.628:115): cwd="/" May 17 00:34:05.658339 kernel: audit: type=1302 audit(1747442045.628:115): item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.658357 kernel: audit: type=1302 audit(1747442045.628:115): item=1 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.658385 kernel: audit: type=1302 audit(1747442045.628:115): item=2 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.658404 kernel: audit: type=1302 audit(1747442045.628:115): item=3 name=(null) inode=14917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=14 name=(null) inode=14919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=15 name=(null) 
inode=14923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=16 name=(null) inode=14919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=17 name=(null) inode=14924 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=18 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=19 name=(null) inode=14925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=20 name=(null) inode=14925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=21 name=(null) inode=14926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=22 name=(null) inode=14925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=23 name=(null) inode=14927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=24 name=(null) inode=14925 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=25 name=(null) inode=14928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=26 name=(null) inode=14925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=27 name=(null) inode=14929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=28 name=(null) inode=14925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=29 name=(null) inode=14930 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=30 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=31 name=(null) inode=14931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=32 name=(null) inode=14931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=33 name=(null) inode=14932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=34 name=(null) inode=14931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=35 name=(null) inode=14933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=36 name=(null) inode=14931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=37 name=(null) inode=14934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=38 name=(null) inode=14931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=39 name=(null) inode=14935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=40 name=(null) inode=14931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=41 name=(null) inode=14936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=42 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=43 name=(null) inode=14937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=44 name=(null) inode=14937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=45 name=(null) inode=14938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=46 name=(null) inode=14937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=47 name=(null) inode=14939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=48 name=(null) inode=14937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=49 name=(null) inode=14940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=50 name=(null) inode=14937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=51 name=(null) inode=14941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=52 name=(null) inode=14937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=53 name=(null) inode=14942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=55 name=(null) inode=14943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=56 name=(null) inode=14943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=57 name=(null) inode=14944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=58 name=(null) inode=14943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=59 name=(null) inode=14945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=60 name=(null) inode=14943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:34:05.628000 audit: PATH item=61 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=62 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=63 name=(null) inode=14947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=64 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=65 name=(null) inode=14948 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=66 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=67 name=(null) inode=14949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=68 name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=69 name=(null) inode=14950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=70 
name=(null) inode=14946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=71 name=(null) inode=14951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=72 name=(null) inode=14943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=73 name=(null) inode=14952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=74 name=(null) inode=14952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=75 name=(null) inode=14953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=76 name=(null) inode=14952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=77 name=(null) inode=14954 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=78 name=(null) inode=14952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=79 name=(null) inode=14955 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=80 name=(null) inode=14952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=81 name=(null) inode=14956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=82 name=(null) inode=14952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=83 name=(null) inode=14957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=84 name=(null) inode=14943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=85 name=(null) inode=14958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=86 name=(null) inode=14958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=87 name=(null) inode=14959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=88 name=(null) inode=14958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=89 name=(null) inode=14960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=90 name=(null) inode=14958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=91 name=(null) inode=14961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=92 name=(null) inode=14958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=93 name=(null) inode=14962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=94 name=(null) inode=14958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=95 name=(null) inode=14963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=96 name=(null) inode=14943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=97 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=98 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=99 name=(null) inode=14965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=100 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=101 name=(null) inode=14966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=102 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=103 name=(null) inode=14967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=104 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=105 name=(null) inode=14968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=106 name=(null) inode=14964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=107 name=(null) inode=14969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PATH item=109 name=(null) inode=15419 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:34:05.628000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:34:05.670040 systemd-networkd[1082]: lo: Link UP May 17 00:34:05.670047 systemd-networkd[1082]: lo: Gained carrier May 17 00:34:05.670439 systemd-networkd[1082]: Enumeration completed May 17 00:34:05.670539 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:34:05.670589 systemd[1]: Started systemd-networkd.service. May 17 00:34:05.671780 systemd-networkd[1082]: eth0: Link UP May 17 00:34:05.671785 systemd-networkd[1082]: eth0: Gained carrier May 17 00:34:05.686304 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 17 00:34:05.691294 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 17 00:34:05.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:05.707291 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:34:05.709451 systemd-networkd[1082]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:34:05.712287 kernel: ACPI: button: Power Button [PWRF] May 17 00:34:05.726117 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 17 00:34:05.728458 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:34:05.728577 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:34:05.728687 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:34:05.735681 kernel: kvm: Nested Virtualization enabled May 17 00:34:05.735716 kernel: SVM: kvm: Nested Paging enabled May 17 00:34:05.735730 kernel: SVM: Virtual VMLOAD VMSAVE supported May 17 00:34:05.735742 kernel: SVM: Virtual GIF supported May 17 00:34:05.754289 kernel: EDAC MC: Ver: 3.0.0 May 17 00:34:05.781631 systemd[1]: Finished systemd-udev-settle.service. May 17 00:34:05.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.783610 systemd[1]: Starting lvm2-activation-early.service... May 17 00:34:05.790096 lvm[1109]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:34:05.814950 systemd[1]: Finished lvm2-activation-early.service. May 17 00:34:05.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.815946 systemd[1]: Reached target cryptsetup.target. May 17 00:34:05.817769 systemd[1]: Starting lvm2-activation.service... May 17 00:34:05.821213 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 17 00:34:05.849360 systemd[1]: Finished lvm2-activation.service. May 17 00:34:05.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:05.868107 systemd[1]: Reached target local-fs-pre.target. May 17 00:34:05.869045 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:34:05.869067 systemd[1]: Reached target local-fs.target. May 17 00:34:05.869887 systemd[1]: Reached target machines.target. May 17 00:34:05.871899 systemd[1]: Starting ldconfig.service... May 17 00:34:05.872985 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:34:05.873040 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:05.873924 systemd[1]: Starting systemd-boot-update.service... May 17 00:34:05.875848 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:34:05.878174 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:34:05.880677 systemd[1]: Starting systemd-sysext.service... May 17 00:34:05.883898 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1114 (bootctl) May 17 00:34:05.885030 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:34:06.075914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:34:06.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:06.079917 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:34:06.084631 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:34:06.084844 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:34:06.091607 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:34:06.095678 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:34:06.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.092892 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:34:06.107284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:34:06.108738 systemd-fsck[1123]: fsck.fat 4.2 (2021-01-31) May 17 00:34:06.108738 systemd-fsck[1123]: /dev/vda1: 791 files, 120746/258078 clusters May 17 00:34:06.110708 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:34:06.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.113870 systemd[1]: Mounting boot.mount... May 17 00:34:06.124563 systemd[1]: Mounted boot.mount. May 17 00:34:06.131300 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:34:06.139664 systemd[1]: Finished systemd-boot-update.service. May 17 00:34:06.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.153007 (sd-sysext)[1134]: Using extensions 'kubernetes'. May 17 00:34:06.153403 (sd-sysext)[1134]: Merged extensions into '/usr'. 
May 17 00:34:06.169152 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:06.170460 systemd[1]: Mounting usr-share-oem.mount... May 17 00:34:06.171500 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:34:06.172469 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:34:06.174432 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:34:06.176414 systemd[1]: Starting modprobe@loop.service... May 17 00:34:06.177429 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:34:06.177531 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:06.177616 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:06.178572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:34:06.178704 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:34:06.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.182318 systemd[1]: Mounted usr-share-oem.mount. May 17 00:34:06.183670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:34:06.183869 systemd[1]: Finished modprobe@efi_pstore.service. 
May 17 00:34:06.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.185364 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:34:06.185494 systemd[1]: Finished modprobe@loop.service. May 17 00:34:06.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.186828 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:34:06.186915 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:34:06.188385 systemd[1]: Finished systemd-sysext.service. May 17 00:34:06.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.190769 systemd[1]: Starting ensure-sysext.service... May 17 00:34:06.192931 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:34:06.197984 systemd[1]: Reloading. 
May 17 00:34:06.207500 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:34:06.208309 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:34:06.210009 systemd-tmpfiles[1149]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:34:06.249911 ldconfig[1113]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:34:06.255847 /usr/lib/systemd/system-generators/torcx-generator[1168]: time="2025-05-17T00:34:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:34:06.256149 /usr/lib/systemd/system-generators/torcx-generator[1168]: time="2025-05-17T00:34:06Z" level=info msg="torcx already run" May 17 00:34:06.332573 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:34:06.332590 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:34:06.351706 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:34:06.402615 systemd[1]: Finished ldconfig.service. May 17 00:34:06.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:06.404567 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:34:06.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.407579 systemd[1]: Starting audit-rules.service... May 17 00:34:06.409563 systemd[1]: Starting clean-ca-certificates.service... May 17 00:34:06.411539 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:34:06.413896 systemd[1]: Starting systemd-resolved.service... May 17 00:34:06.416103 systemd[1]: Starting systemd-timesyncd.service... May 17 00:34:06.418726 systemd[1]: Starting systemd-update-utmp.service... May 17 00:34:06.420259 systemd[1]: Finished clean-ca-certificates.service. May 17 00:34:06.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.423216 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:34:06.425000 audit[1230]: SYSTEM_BOOT pid=1230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:34:06.425372 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:34:06.426681 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:34:06.428599 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:34:06.430463 systemd[1]: Starting modprobe@loop.service... May 17 00:34:06.431438 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:34:06.431554 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:06.431815 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:34:06.433004 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:34:06.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.434570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:34:06.434876 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:34:06.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.436244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:34:06.436678 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:34:06.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:34:06.439699 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:34:06.441293 systemd[1]: Starting systemd-update-done.service... May 17 00:34:06.443133 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:34:06.443417 systemd[1]: Finished modprobe@loop.service. May 17 00:34:06.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.445114 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:34:06.446327 systemd[1]: Finished systemd-update-utmp.service. May 17 00:34:06.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.449579 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:34:06.450650 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:34:06.452685 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:34:06.456600 systemd[1]: Starting modprobe@loop.service... May 17 00:34:06.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:06.457470 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:34:06.457587 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:06.457693 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:34:06.458604 systemd[1]: Finished systemd-update-done.service. May 17 00:34:06.459870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:34:06.460037 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:34:06.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.461595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:34:06.461719 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:34:06.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.463240 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:34:06.463398 systemd[1]: Finished modprobe@loop.service. May 17 00:34:06.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.464798 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:34:06.464877 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:34:06.467355 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:34:06.468527 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:34:06.470278 systemd[1]: Starting modprobe@drm.service... May 17 00:34:06.472090 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:34:06.474816 systemd[1]: Starting modprobe@loop.service... May 17 00:34:06.476014 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:34:06.476184 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:06.478097 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:34:06.479611 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:34:06.480972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:34:06.481118 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 00:34:06.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.484079 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:34:06.484235 systemd[1]: Finished modprobe@drm.service. May 17 00:34:06.485435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:34:06.486066 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:34:06.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:06.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:06.488000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:34:06.488000 audit[1266]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe04a1bbb0 a2=420 a3=0 items=0 ppid=1218 pid=1266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:06.488000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:34:06.488499 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:34:06.493539 augenrules[1266]: No rules May 17 00:34:06.488717 systemd[1]: Finished modprobe@loop.service. May 17 00:34:06.490507 systemd[1]: Finished audit-rules.service. May 17 00:34:06.491746 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:34:06.491875 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:34:06.493673 systemd[1]: Finished ensure-sysext.service. May 17 00:34:06.514998 systemd[1]: Started systemd-timesyncd.service. May 17 00:34:07.366172 systemd-timesyncd[1226]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 17 00:34:07.366217 systemd-timesyncd[1226]: Initial clock synchronization to Sat 2025-05-17 00:34:07.366059 UTC. May 17 00:34:07.366390 systemd-resolved[1225]: Positive Trust Anchors: May 17 00:34:07.366399 systemd-resolved[1225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:34:07.366419 systemd[1]: Reached target time-set.target. 
May 17 00:34:07.366426 systemd-resolved[1225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:34:07.374079 systemd-resolved[1225]: Defaulting to hostname 'linux'. May 17 00:34:07.375421 systemd[1]: Started systemd-resolved.service. May 17 00:34:07.376358 systemd[1]: Reached target network.target. May 17 00:34:07.377190 systemd[1]: Reached target nss-lookup.target. May 17 00:34:07.378037 systemd[1]: Reached target sysinit.target. May 17 00:34:07.378903 systemd[1]: Started motdgen.path. May 17 00:34:07.379627 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:34:07.380869 systemd[1]: Started logrotate.timer. May 17 00:34:07.381659 systemd[1]: Started mdadm.timer. May 17 00:34:07.382345 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:34:07.383214 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:34:07.383232 systemd[1]: Reached target paths.target. May 17 00:34:07.383990 systemd[1]: Reached target timers.target. May 17 00:34:07.385050 systemd[1]: Listening on dbus.socket. May 17 00:34:07.386961 systemd[1]: Starting docker.socket... May 17 00:34:07.388735 systemd[1]: Listening on sshd.socket. May 17 00:34:07.389581 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:07.389886 systemd[1]: Listening on docker.socket. 
May 17 00:34:07.390668 systemd[1]: Reached target sockets.target. May 17 00:34:07.391481 systemd[1]: Reached target basic.target. May 17 00:34:07.392406 systemd[1]: System is tainted: cgroupsv1 May 17 00:34:07.392455 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:34:07.392481 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:34:07.393564 systemd[1]: Starting containerd.service... May 17 00:34:07.395522 systemd[1]: Starting dbus.service... May 17 00:34:07.397511 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:34:07.399710 systemd[1]: Starting extend-filesystems.service... May 17 00:34:07.400681 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:34:07.401901 systemd[1]: Starting motdgen.service... May 17 00:34:07.403907 jq[1282]: false May 17 00:34:07.403979 systemd[1]: Starting prepare-helm.service... May 17 00:34:07.406120 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:34:07.407940 systemd[1]: Starting sshd-keygen.service... May 17 00:34:07.410421 systemd[1]: Starting systemd-logind.service... May 17 00:34:07.411765 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:34:07.411845 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:34:07.412873 systemd[1]: Starting update-engine.service... May 17 00:34:07.414744 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:34:07.417724 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 17 00:34:07.436206 extend-filesystems[1283]: Found loop1 May 17 00:34:07.436206 extend-filesystems[1283]: Found sr0 May 17 00:34:07.436206 extend-filesystems[1283]: Found vda May 17 00:34:07.436206 extend-filesystems[1283]: Found vda1 May 17 00:34:07.436206 extend-filesystems[1283]: Found vda2 May 17 00:34:07.436206 extend-filesystems[1283]: Found vda3 May 17 00:34:07.436206 extend-filesystems[1283]: Found usr May 17 00:34:07.436206 extend-filesystems[1283]: Found vda4 May 17 00:34:07.436206 extend-filesystems[1283]: Found vda6 May 17 00:34:07.436206 extend-filesystems[1283]: Found vda7 May 17 00:34:07.436206 extend-filesystems[1283]: Found vda9 May 17 00:34:07.436206 extend-filesystems[1283]: Checking size of /dev/vda9 May 17 00:34:07.419902 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:34:07.455594 jq[1297]: true May 17 00:34:07.455667 tar[1306]: linux-amd64/helm May 17 00:34:07.420915 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:34:07.421164 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:34:07.456343 jq[1310]: true May 17 00:34:07.437392 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:34:07.437616 systemd[1]: Finished motdgen.service. May 17 00:34:07.470149 bash[1330]: Updated "/home/core/.ssh/authorized_keys" May 17 00:34:07.470453 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:34:07.472285 dbus-daemon[1281]: [system] SELinux support is enabled May 17 00:34:07.472428 systemd[1]: Started dbus.service. May 17 00:34:07.474732 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:34:07.474747 systemd[1]: Reached target system-config.target. 
May 17 00:34:07.475940 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:34:07.475952 systemd[1]: Reached target user-config.target. May 17 00:34:07.523456 extend-filesystems[1283]: Resized partition /dev/vda9 May 17 00:34:07.571420 systemd-logind[1292]: Watching system buttons on /dev/input/event2 (Power Button) May 17 00:34:07.572215 update_engine[1295]: I0517 00:34:07.572000 1295 main.cc:92] Flatcar Update Engine starting May 17 00:34:07.572428 systemd-logind[1292]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:34:07.572639 systemd-logind[1292]: New seat seat0. May 17 00:34:07.573026 extend-filesystems[1339]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:34:07.574529 systemd[1]: Started systemd-logind.service. May 17 00:34:07.578436 systemd[1]: Started update-engine.service. May 17 00:34:07.583047 update_engine[1295]: I0517 00:34:07.577842 1295 update_check_scheduler.cc:74] Next update check in 4m7s May 17 00:34:07.580957 systemd[1]: Started locksmithd.service. May 17 00:34:07.584176 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 00:34:07.610096 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 00:34:07.675044 extend-filesystems[1339]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:34:07.675044 extend-filesystems[1339]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:34:07.675044 extend-filesystems[1339]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:34:07.680309 extend-filesystems[1283]: Resized filesystem in /dev/vda9 May 17 00:34:07.681307 env[1307]: time="2025-05-17T00:34:07.676948014Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:34:07.682523 systemd[1]: extend-filesystems.service: Deactivated successfully. 
May 17 00:34:07.682842 systemd[1]: Finished extend-filesystems.service. May 17 00:34:07.709190 env[1307]: time="2025-05-17T00:34:07.709060531Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:34:07.709495 env[1307]: time="2025-05-17T00:34:07.709478055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:07.710978 env[1307]: time="2025-05-17T00:34:07.710955797Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:34:07.711056 env[1307]: time="2025-05-17T00:34:07.711036339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:07.711407 env[1307]: time="2025-05-17T00:34:07.711382107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:34:07.711497 env[1307]: time="2025-05-17T00:34:07.711475132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:34:07.711593 env[1307]: time="2025-05-17T00:34:07.711569028Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:34:07.711677 env[1307]: time="2025-05-17T00:34:07.711655140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:34:07.711841 env[1307]: time="2025-05-17T00:34:07.711820620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:34:07.712240 env[1307]: time="2025-05-17T00:34:07.712218787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:34:07.712532 env[1307]: time="2025-05-17T00:34:07.712507259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:34:07.712618 env[1307]: time="2025-05-17T00:34:07.712598229Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:34:07.712746 env[1307]: time="2025-05-17T00:34:07.712723004Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:34:07.712852 env[1307]: time="2025-05-17T00:34:07.712829844Z" level=info msg="metadata content store policy set" policy=shared May 17 00:34:07.819052 locksmithd[1340]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.881552119Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.881669830Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.881689006Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.881943243Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.881982968Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882010379Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882026329Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882041457Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882082755Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882099546Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882117781Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882153508Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882368181Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:34:07.884144 env[1307]: time="2025-05-17T00:34:07.882485240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883022939Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883088121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883105173Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883180795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883200011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883228325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883241529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883255656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883294429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883309757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883323032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883337780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883533307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883551501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884589 env[1307]: time="2025-05-17T00:34:07.883566870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:34:07.884988 env[1307]: time="2025-05-17T00:34:07.883582519Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:34:07.884988 env[1307]: time="2025-05-17T00:34:07.883614028Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:34:07.884988 env[1307]: time="2025-05-17T00:34:07.883626862Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:34:07.884988 env[1307]: time="2025-05-17T00:34:07.883663531Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:34:07.884988 env[1307]: time="2025-05-17T00:34:07.883728874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:34:07.886181 env[1307]: time="2025-05-17T00:34:07.884082547Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:34:07.886181 env[1307]: time="2025-05-17T00:34:07.885148397Z" level=info msg="Connect containerd service" May 17 00:34:07.886181 env[1307]: time="2025-05-17T00:34:07.885206997Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:34:07.886181 env[1307]: time="2025-05-17T00:34:07.886030002Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:34:07.889107 env[1307]: time="2025-05-17T00:34:07.887037172Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:34:07.889107 env[1307]: time="2025-05-17T00:34:07.887096132Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:34:07.889107 env[1307]: time="2025-05-17T00:34:07.887144453Z" level=info msg="containerd successfully booted in 0.274243s" May 17 00:34:07.887304 systemd[1]: Started containerd.service. 
May 17 00:34:07.891090 env[1307]: time="2025-05-17T00:34:07.890554071Z" level=info msg="Start subscribing containerd event" May 17 00:34:07.891090 env[1307]: time="2025-05-17T00:34:07.890674807Z" level=info msg="Start recovering state" May 17 00:34:07.891090 env[1307]: time="2025-05-17T00:34:07.890760097Z" level=info msg="Start event monitor" May 17 00:34:07.891090 env[1307]: time="2025-05-17T00:34:07.890788941Z" level=info msg="Start snapshots syncer" May 17 00:34:07.891090 env[1307]: time="2025-05-17T00:34:07.890806444Z" level=info msg="Start cni network conf syncer for default" May 17 00:34:07.891090 env[1307]: time="2025-05-17T00:34:07.890826081Z" level=info msg="Start streaming server" May 17 00:34:07.935235 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:07.935290 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:34:07.951206 tar[1306]: linux-amd64/LICENSE May 17 00:34:07.951310 tar[1306]: linux-amd64/README.md May 17 00:34:07.955482 systemd[1]: Finished prepare-helm.service. May 17 00:34:08.243060 sshd_keygen[1322]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:34:08.264034 systemd[1]: Finished sshd-keygen.service. May 17 00:34:08.266882 systemd[1]: Starting issuegen.service... May 17 00:34:08.273135 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:34:08.273366 systemd[1]: Finished issuegen.service. May 17 00:34:08.276172 systemd[1]: Starting systemd-user-sessions.service... May 17 00:34:08.282695 systemd[1]: Finished systemd-user-sessions.service. May 17 00:34:08.285385 systemd[1]: Started getty@tty1.service. May 17 00:34:08.287472 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:34:08.288794 systemd[1]: Reached target getty.target. 
May 17 00:34:08.317255 systemd-networkd[1082]: eth0: Gained IPv6LL May 17 00:34:08.319212 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:34:08.320701 systemd[1]: Reached target network-online.target. May 17 00:34:08.323319 systemd[1]: Starting kubelet.service... May 17 00:34:09.305945 systemd[1]: Started kubelet.service. May 17 00:34:09.307563 systemd[1]: Reached target multi-user.target. May 17 00:34:09.309841 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:34:09.315795 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:34:09.316019 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:34:09.318697 systemd[1]: Startup finished in 5.732s (kernel) + 7.685s (userspace) = 13.418s. May 17 00:34:09.740318 systemd[1]: Created slice system-sshd.slice. May 17 00:34:09.741352 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:41102.service. May 17 00:34:09.783445 sshd[1389]: Accepted publickey for core from 10.0.0.1 port 41102 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:34:09.785498 sshd[1389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:09.793034 systemd[1]: Created slice user-500.slice. May 17 00:34:09.793908 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:34:09.795771 systemd-logind[1292]: New session 1 of user core. May 17 00:34:09.806149 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:34:09.807822 systemd[1]: Starting user@500.service... May 17 00:34:09.811448 (systemd)[1395]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:09.913416 systemd[1395]: Queued start job for default target default.target. May 17 00:34:09.913628 systemd[1395]: Reached target paths.target. May 17 00:34:09.913643 systemd[1395]: Reached target sockets.target. May 17 00:34:09.913654 systemd[1395]: Reached target timers.target. 
May 17 00:34:09.913665 systemd[1395]: Reached target basic.target. May 17 00:34:09.913809 systemd[1]: Started user@500.service. May 17 00:34:09.914710 systemd[1]: Started session-1.scope. May 17 00:34:09.914953 systemd[1395]: Reached target default.target. May 17 00:34:09.915134 systemd[1395]: Startup finished in 74ms. May 17 00:34:09.943857 kubelet[1381]: E0517 00:34:09.943794 1381 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:34:09.945427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:34:09.945603 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:34:09.965219 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:41114.service. May 17 00:34:10.001232 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 41114 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:34:10.002116 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:10.005859 systemd-logind[1292]: New session 2 of user core. May 17 00:34:10.007061 systemd[1]: Started session-2.scope. May 17 00:34:10.060334 sshd[1405]: pam_unix(sshd:session): session closed for user core May 17 00:34:10.063278 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:41124.service. May 17 00:34:10.063833 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:41114.service: Deactivated successfully. May 17 00:34:10.064781 systemd-logind[1292]: Session 2 logged out. Waiting for processes to exit. May 17 00:34:10.064807 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:34:10.065946 systemd-logind[1292]: Removed session 2. 
May 17 00:34:10.099015 sshd[1411]: Accepted publickey for core from 10.0.0.1 port 41124 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:34:10.100007 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:10.103651 systemd-logind[1292]: New session 3 of user core. May 17 00:34:10.104397 systemd[1]: Started session-3.scope. May 17 00:34:10.155247 sshd[1411]: pam_unix(sshd:session): session closed for user core May 17 00:34:10.159256 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:41136.service. May 17 00:34:10.159672 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:41124.service: Deactivated successfully. May 17 00:34:10.160453 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:34:10.160477 systemd-logind[1292]: Session 3 logged out. Waiting for processes to exit. May 17 00:34:10.161452 systemd-logind[1292]: Removed session 3. May 17 00:34:10.196474 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 41136 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:34:10.197596 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:10.200813 systemd-logind[1292]: New session 4 of user core. May 17 00:34:10.201553 systemd[1]: Started session-4.scope. May 17 00:34:10.256023 sshd[1418]: pam_unix(sshd:session): session closed for user core May 17 00:34:10.258878 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:41142.service. May 17 00:34:10.259711 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:41136.service: Deactivated successfully. May 17 00:34:10.260617 systemd-logind[1292]: Session 4 logged out. Waiting for processes to exit. May 17 00:34:10.260674 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:34:10.261675 systemd-logind[1292]: Removed session 4. 
May 17 00:34:10.297898 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 41142 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:34:10.299121 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:10.302546 systemd-logind[1292]: New session 5 of user core. May 17 00:34:10.303340 systemd[1]: Started session-5.scope. May 17 00:34:10.360908 sudo[1430]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:34:10.361116 sudo[1430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:34:10.371354 dbus-daemon[1281]: avc:  received setenforce notice (enforcing=1) May 17 00:34:10.374188 sudo[1430]: pam_unix(sudo:session): session closed for user root May 17 00:34:10.375898 sshd[1424]: pam_unix(sshd:session): session closed for user core May 17 00:34:10.378582 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:41152.service. May 17 00:34:10.379082 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:41142.service: Deactivated successfully. May 17 00:34:10.379917 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:34:10.380502 systemd-logind[1292]: Session 5 logged out. Waiting for processes to exit. May 17 00:34:10.381449 systemd-logind[1292]: Removed session 5. May 17 00:34:10.415744 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 41152 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:34:10.417168 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:10.420674 systemd-logind[1292]: New session 6 of user core. May 17 00:34:10.421410 systemd[1]: Started session-6.scope.
May 17 00:34:10.474238 sudo[1439]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:34:10.474428 sudo[1439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:34:10.477341 sudo[1439]: pam_unix(sudo:session): session closed for user root May 17 00:34:10.481544 sudo[1438]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:34:10.481740 sudo[1438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:34:10.490420 systemd[1]: Stopping audit-rules.service... May 17 00:34:10.491000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 00:34:10.491000 audit[1442]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc480b8f10 a2=420 a3=0 items=0 ppid=1 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:10.491000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 17 00:34:10.491921 auditctl[1442]: No rules May 17 00:34:10.492165 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:34:10.492355 systemd[1]: Stopped audit-rules.service. May 17 00:34:10.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.493659 systemd[1]: Starting audit-rules.service... May 17 00:34:10.508820 augenrules[1460]: No rules May 17 00:34:10.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:34:10.510000 audit[1438]: USER_END pid=1438 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:34:10.510000 audit[1438]: CRED_DISP pid=1438 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:34:10.509431 systemd[1]: Finished audit-rules.service. May 17 00:34:10.510426 sudo[1438]: pam_unix(sudo:session): session closed for user root May 17 00:34:10.512000 audit[1432]: USER_END pid=1432 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:10.512000 audit[1432]: CRED_DISP pid=1432 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:10.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:41160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.116:22-10.0.0.1:41152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:10.511568 sshd[1432]: pam_unix(sshd:session): session closed for user core May 17 00:34:10.513993 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:41160.service. 
May 17 00:34:10.514495 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:41152.service: Deactivated successfully. May 17 00:34:10.515260 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:34:10.515849 systemd-logind[1292]: Session 6 logged out. Waiting for processes to exit. May 17 00:34:10.516882 systemd-logind[1292]: Removed session 6. May 17 00:34:10.566000 audit[1466]: USER_ACCT pid=1466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:10.567120 sshd[1466]: Accepted publickey for core from 10.0.0.1 port 41160 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:34:10.567000 audit[1466]: CRED_ACQ pid=1466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:10.568000 audit[1466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc61f38370 a2=3 a3=0 items=0 ppid=1 pid=1466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:10.568000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:34:10.568330 sshd[1466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:34:10.572037 systemd-logind[1292]: New session 7 of user core. May 17 00:34:10.572671 systemd[1]: Started session-7.scope. 
May 17 00:34:10.576000 audit[1466]: USER_START pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:10.577000 audit[1470]: CRED_ACQ pid=1470 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:10.625000 audit[1471]: USER_ACCT pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:34:10.625880 sudo[1471]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:34:10.625000 audit[1471]: CRED_REFR pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:34:10.626089 sudo[1471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:34:10.628000 audit[1471]: USER_START pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:34:10.667773 systemd[1]: Starting docker.service... 
May 17 00:34:10.779435 env[1482]: time="2025-05-17T00:34:10.779309493Z" level=info msg="Starting up" May 17 00:34:10.780706 env[1482]: time="2025-05-17T00:34:10.780683912Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:34:10.780706 env[1482]: time="2025-05-17T00:34:10.780699902Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:34:10.780780 env[1482]: time="2025-05-17T00:34:10.780726863Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:34:10.780780 env[1482]: time="2025-05-17T00:34:10.780736391Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:34:10.782933 env[1482]: time="2025-05-17T00:34:10.782917884Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:34:10.782933 env[1482]: time="2025-05-17T00:34:10.782930688Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:34:10.783010 env[1482]: time="2025-05-17T00:34:10.782940597Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:34:10.783010 env[1482]: time="2025-05-17T00:34:10.782949233Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:34:12.675163 env[1482]: time="2025-05-17T00:34:12.675112782Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 17 00:34:12.675163 env[1482]: time="2025-05-17T00:34:12.675145463Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 17 00:34:12.675538 env[1482]: time="2025-05-17T00:34:12.675373351Z" level=info msg="Loading containers: start." 
May 17 00:34:12.727000 audit[1516]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.728935 kernel: kauditd_printk_skb: 171 callbacks suppressed May 17 00:34:12.728989 kernel: audit: type=1325 audit(1747442052.727:174): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.727000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff16d3ae90 a2=0 a3=7fff16d3ae7c items=0 ppid=1482 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.736191 kernel: audit: type=1300 audit(1747442052.727:174): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff16d3ae90 a2=0 a3=7fff16d3ae7c items=0 ppid=1482 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.736242 kernel: audit: type=1327 audit(1747442052.727:174): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 17 00:34:12.727000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 17 00:34:12.729000 audit[1518]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.740387 kernel: audit: type=1325 audit(1747442052.729:175): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.740415 kernel: audit: type=1300 audit(1747442052.729:175): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdbf71d0f0 a2=0 a3=7ffdbf71d0dc items=0 
ppid=1482 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.729000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdbf71d0f0 a2=0 a3=7ffdbf71d0dc items=0 ppid=1482 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.744997 kernel: audit: type=1327 audit(1747442052.729:175): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 17 00:34:12.729000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 17 00:34:12.746971 kernel: audit: type=1325 audit(1747442052.731:176): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.731000 audit[1520]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.749134 kernel: audit: type=1300 audit(1747442052.731:176): arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc14a06730 a2=0 a3=7ffc14a0671c items=0 ppid=1482 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.731000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc14a06730 a2=0 a3=7ffc14a0671c items=0 ppid=1482 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.731000 
audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 00:34:12.756048 kernel: audit: type=1327 audit(1747442052.731:176): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 00:34:12.756094 kernel: audit: type=1325 audit(1747442052.732:177): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.732000 audit[1522]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.732000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc79131b90 a2=0 a3=7ffc79131b7c items=0 ppid=1482 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.732000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 00:34:12.735000 audit[1524]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.735000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffef021dc80 a2=0 a3=7ffef021dc6c items=0 ppid=1482 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.735000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 17 00:34:12.796000 audit[1529]: NETFILTER_CFG 
table=filter:7 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.796000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdf640df20 a2=0 a3=7ffdf640df0c items=0 ppid=1482 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.796000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 17 00:34:12.813000 audit[1531]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.813000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce4a77490 a2=0 a3=7ffce4a7747c items=0 ppid=1482 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.813000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 17 00:34:12.815000 audit[1533]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.815000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffee135e7e0 a2=0 a3=7ffee135e7cc items=0 ppid=1482 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.815000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 17 
00:34:12.816000 audit[1535]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.816000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc1f8704e0 a2=0 a3=7ffc1f8704cc items=0 ppid=1482 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.816000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:34:12.827000 audit[1539]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.827000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcb4bae420 a2=0 a3=7ffcb4bae40c items=0 ppid=1482 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.827000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 00:34:12.834000 audit[1540]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.834000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffed131da0 a2=0 a3=7fffed131d8c items=0 ppid=1482 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.834000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:34:12.844145 kernel: Initializing XFRM netlink socket May 17 00:34:12.888313 env[1482]: time="2025-05-17T00:34:12.888268393Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 00:34:12.904000 audit[1548]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1548 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.904000 audit[1548]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff4aa99d70 a2=0 a3=7fff4aa99d5c items=0 ppid=1482 pid=1548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.904000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 17 00:34:12.916000 audit[1551]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.916000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe75e16600 a2=0 a3=7ffe75e165ec items=0 ppid=1482 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.916000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 17 00:34:12.918000 audit[1554]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 
comm="iptables" May 17 00:34:12.918000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff5e37eda0 a2=0 a3=7fff5e37ed8c items=0 ppid=1482 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.918000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 17 00:34:12.919000 audit[1556]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.919000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd019366b0 a2=0 a3=7ffd0193669c items=0 ppid=1482 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.919000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 17 00:34:12.921000 audit[1558]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.921000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffde1bf2680 a2=0 a3=7ffde1bf266c items=0 ppid=1482 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.921000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 17 00:34:12.923000 audit[1560]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.923000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffec0e245e0 a2=0 a3=7ffec0e245cc items=0 ppid=1482 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.923000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 17 00:34:12.925000 audit[1562]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.925000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc874b8990 a2=0 a3=7ffc874b897c items=0 ppid=1482 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.925000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 17 00:34:12.932000 audit[1565]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.932000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd47727180 a2=0 a3=7ffd4772716c items=0 ppid=1482 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.932000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 17 00:34:12.933000 audit[1567]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.933000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe587d3820 a2=0 a3=7ffe587d380c items=0 ppid=1482 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.933000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 00:34:12.935000 audit[1569]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.935000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc928c5f70 a2=0 a3=7ffc928c5f5c items=0 ppid=1482 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.935000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 00:34:12.936000 audit[1571]: NETFILTER_CFG table=filter:23 family=2 entries=1 
op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:12.936000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe479b3490 a2=0 a3=7ffe479b347c items=0 ppid=1482 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:12.936000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 17 00:34:12.937373 systemd-networkd[1082]: docker0: Link UP May 17 00:34:13.019000 audit[1575]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:13.019000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb345a650 a2=0 a3=7ffdb345a63c items=0 ppid=1482 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:13.019000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 00:34:13.025000 audit[1576]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:13.025000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc0cadea50 a2=0 a3=7ffc0cadea3c items=0 ppid=1482 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:13.025000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:34:13.025556 env[1482]: time="2025-05-17T00:34:13.025524424Z" level=info msg="Loading containers: done." May 17 00:34:13.046046 env[1482]: time="2025-05-17T00:34:13.045997090Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:34:13.046241 env[1482]: time="2025-05-17T00:34:13.046203938Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:34:13.046327 env[1482]: time="2025-05-17T00:34:13.046312522Z" level=info msg="Daemon has completed initialization" May 17 00:34:13.066625 systemd[1]: Started docker.service. May 17 00:34:13.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:13.070502 env[1482]: time="2025-05-17T00:34:13.070447640Z" level=info msg="API listen on /run/docker.sock" May 17 00:34:13.944739 env[1307]: time="2025-05-17T00:34:13.944675866Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:34:14.611366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501991102.mount: Deactivated successfully. 
May 17 00:34:16.440314 env[1307]: time="2025-05-17T00:34:16.440253158Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:16.442874 env[1307]: time="2025-05-17T00:34:16.442803363Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:16.445849 env[1307]: time="2025-05-17T00:34:16.445808963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:16.447999 env[1307]: time="2025-05-17T00:34:16.447943188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:16.448843 env[1307]: time="2025-05-17T00:34:16.448800396Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\""
May 17 00:34:16.449686 env[1307]: time="2025-05-17T00:34:16.449657905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 17 00:34:19.699138 env[1307]: time="2025-05-17T00:34:19.699047131Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:19.700832 env[1307]: time="2025-05-17T00:34:19.700788128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:19.704845 env[1307]: time="2025-05-17T00:34:19.704810365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:19.705968 env[1307]: time="2025-05-17T00:34:19.705899078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\""
May 17 00:34:19.706549 env[1307]: time="2025-05-17T00:34:19.706527047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 17 00:34:19.706994 env[1307]: time="2025-05-17T00:34:19.706957945Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:20.042995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:34:20.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:20.043236 systemd[1]: Stopped kubelet.service.
May 17 00:34:20.044113 kernel: kauditd_printk_skb: 63 callbacks suppressed
May 17 00:34:20.044154 kernel: audit: type=1130 audit(1747442060.043:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:20.044832 systemd[1]: Starting kubelet.service...
May 17 00:34:20.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:20.050262 kernel: audit: type=1131 audit(1747442060.043:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:20.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:20.203217 systemd[1]: Started kubelet.service.
May 17 00:34:20.208092 kernel: audit: type=1130 audit(1747442060.203:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:20.493191 kubelet[1624]: E0517 00:34:20.493025 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:34:20.496668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:34:20.496816 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:34:20.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:34:20.501099 kernel: audit: type=1131 audit(1747442060.497:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:34:21.838853 env[1307]: time="2025-05-17T00:34:21.838773865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:21.850734 env[1307]: time="2025-05-17T00:34:21.850653757Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:21.855214 env[1307]: time="2025-05-17T00:34:21.855179889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:21.866974 env[1307]: time="2025-05-17T00:34:21.866925239Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:21.867798 env[1307]: time="2025-05-17T00:34:21.867763211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\""
May 17 00:34:21.868331 env[1307]: time="2025-05-17T00:34:21.868290120Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 17 00:34:23.900559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224700200.mount: Deactivated successfully.
May 17 00:34:25.159479 env[1307]: time="2025-05-17T00:34:25.159411954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:25.161255 env[1307]: time="2025-05-17T00:34:25.161226719Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:25.162789 env[1307]: time="2025-05-17T00:34:25.162755227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:25.164104 env[1307]: time="2025-05-17T00:34:25.164047242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:25.164426 env[1307]: time="2025-05-17T00:34:25.164381970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 17 00:34:25.165019 env[1307]: time="2025-05-17T00:34:25.164972398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 17 00:34:25.717021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788773629.mount: Deactivated successfully.
May 17 00:34:27.125833 env[1307]: time="2025-05-17T00:34:27.125755006Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:27.127592 env[1307]: time="2025-05-17T00:34:27.127523925Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:27.129384 env[1307]: time="2025-05-17T00:34:27.129359760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:27.132280 env[1307]: time="2025-05-17T00:34:27.132226769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:27.193665 env[1307]: time="2025-05-17T00:34:27.193598800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 00:34:27.194217 env[1307]: time="2025-05-17T00:34:27.194198114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:34:27.646756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2220316793.mount: Deactivated successfully.
May 17 00:34:27.651361 env[1307]: time="2025-05-17T00:34:27.651298059Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:27.653285 env[1307]: time="2025-05-17T00:34:27.653248820Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:27.654758 env[1307]: time="2025-05-17T00:34:27.654722074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:27.656196 env[1307]: time="2025-05-17T00:34:27.656169240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:27.656673 env[1307]: time="2025-05-17T00:34:27.656637789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:34:27.657187 env[1307]: time="2025-05-17T00:34:27.657145812Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 17 00:34:28.291412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878659403.mount: Deactivated successfully.
May 17 00:34:30.542837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:34:30.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:30.543095 systemd[1]: Stopped kubelet.service.
May 17 00:34:30.544538 systemd[1]: Starting kubelet.service...
May 17 00:34:30.549646 kernel: audit: type=1130 audit(1747442070.543:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:30.549813 kernel: audit: type=1131 audit(1747442070.543:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:30.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:30.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:30.674789 systemd[1]: Started kubelet.service.
May 17 00:34:30.679149 kernel: audit: type=1130 audit(1747442070.674:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:30.742515 kubelet[1640]: E0517 00:34:30.742454 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:34:30.744236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:34:30.744434 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:34:30.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:34:30.748100 kernel: audit: type=1131 audit(1747442070.742:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:34:32.032543 env[1307]: time="2025-05-17T00:34:32.032476196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:32.034596 env[1307]: time="2025-05-17T00:34:32.034528778Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:32.037208 env[1307]: time="2025-05-17T00:34:32.037161277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:32.039134 env[1307]: time="2025-05-17T00:34:32.039058637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:34:32.039840 env[1307]: time="2025-05-17T00:34:32.039797013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
May 17 00:34:34.919846 systemd[1]: Stopped kubelet.service.
May 17 00:34:34.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:34.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:34.923586 systemd[1]: Starting kubelet.service...
May 17 00:34:34.926751 kernel: audit: type=1130 audit(1747442074.919:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:34.926821 kernel: audit: type=1131 audit(1747442074.921:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:34.942543 systemd[1]: Reloading.
May 17 00:34:35.002984 /usr/lib/systemd/system-generators/torcx-generator[1696]: time="2025-05-17T00:34:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:34:35.003354 /usr/lib/systemd/system-generators/torcx-generator[1696]: time="2025-05-17T00:34:35Z" level=info msg="torcx already run"
May 17 00:34:35.160192 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:34:35.160208 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:34:35.178746 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:34:35.245615 systemd[1]: Started kubelet.service.
May 17 00:34:35.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:35.247044 systemd[1]: Stopping kubelet.service...
May 17 00:34:35.247319 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:34:35.247504 systemd[1]: Stopped kubelet.service.
May 17 00:34:35.248765 systemd[1]: Starting kubelet.service...
May 17 00:34:35.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:35.253087 kernel: audit: type=1130 audit(1747442075.245:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:35.253141 kernel: audit: type=1131 audit(1747442075.247:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:35.338124 systemd[1]: Started kubelet.service.
May 17 00:34:35.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:35.343084 kernel: audit: type=1130 audit(1747442075.338:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:34:35.434435 kubelet[1758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:34:35.434873 kubelet[1758]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:34:35.434873 kubelet[1758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:34:35.435133 kubelet[1758]: I0517 00:34:35.435084 1758 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:34:35.632056 kubelet[1758]: I0517 00:34:35.632001 1758 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:34:35.632056 kubelet[1758]: I0517 00:34:35.632038 1758 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:34:35.632345 kubelet[1758]: I0517 00:34:35.632321 1758 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:34:35.664774 kubelet[1758]: E0517 00:34:35.664709 1758 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError"
May 17 00:34:35.665916 kubelet[1758]: I0517 00:34:35.665892 1758 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:34:35.673916 kubelet[1758]: E0517 00:34:35.673880 1758 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:34:35.673916 kubelet[1758]: I0517 00:34:35.673917 1758 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:34:35.681118 kubelet[1758]: I0517 00:34:35.681091 1758 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:34:35.682135 kubelet[1758]: I0517 00:34:35.682109 1758 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 17 00:34:35.682273 kubelet[1758]: I0517 00:34:35.682234 1758 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:34:35.682468 kubelet[1758]: I0517 00:34:35.682267 1758 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 17 00:34:35.682581 kubelet[1758]: I0517 00:34:35.682475 1758 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:34:35.682581 kubelet[1758]: I0517 00:34:35.682486 1758 container_manager_linux.go:300] "Creating device plugin manager"
May 17 00:34:35.682635 kubelet[1758]: I0517 00:34:35.682621 1758 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:34:35.695771 kubelet[1758]: I0517 00:34:35.695208 1758 kubelet.go:408] "Attempting to sync node with API server"
May 17 00:34:35.695771 kubelet[1758]: I0517 00:34:35.695241 1758 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:34:35.695771 kubelet[1758]: I0517 00:34:35.695284 1758 kubelet.go:314] "Adding apiserver pod source"
May 17 00:34:35.695771 kubelet[1758]: I0517 00:34:35.695303 1758 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:34:35.699668 kubelet[1758]: W0517 00:34:35.699593 1758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
May 17 00:34:35.699799 kubelet[1758]: E0517 00:34:35.699699 1758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError"
May 17 00:34:35.699910 kubelet[1758]: W0517 00:34:35.699877 1758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
May 17 00:34:35.699938 kubelet[1758]: E0517 00:34:35.699916 1758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError"
May 17 00:34:35.701506 kubelet[1758]: I0517 00:34:35.701487 1758 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 00:34:35.702156 kubelet[1758]: I0517 00:34:35.702123 1758 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:34:35.703771 kubelet[1758]: W0517 00:34:35.703751 1758 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 00:34:35.706233 kubelet[1758]: I0517 00:34:35.706209 1758 server.go:1274] "Started kubelet"
May 17 00:34:35.706300 kubelet[1758]: I0517 00:34:35.706274 1758 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:34:35.706920 kubelet[1758]: I0517 00:34:35.706866 1758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:34:35.707830 kubelet[1758]: I0517 00:34:35.707803 1758 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:34:35.708000 audit[1758]: AVC avc: denied { mac_admin } for pid=1758 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:34:35.708732 kubelet[1758]: I0517 00:34:35.708559 1758 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
May 17 00:34:35.708732 kubelet[1758]: I0517 00:34:35.708621 1758 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
May 17 00:34:35.708880 kubelet[1758]: I0517 00:34:35.708756 1758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:34:35.709565 kubelet[1758]: I0517 00:34:35.709540 1758 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:34:35.710721 kubelet[1758]: I0517 00:34:35.710701 1758 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 00:34:35.711386 kubelet[1758]: I0517 00:34:35.711370 1758 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 00:34:35.711433 kubelet[1758]: I0517 00:34:35.711410 1758 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:34:35.711780 kubelet[1758]: I0517 00:34:35.711764 1758 server.go:449] "Adding debug handlers to kubelet server"
May 17 00:34:35.714101 kernel: audit: type=1400 audit(1747442075.708:212): avc: denied { mac_admin } for pid=1758 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:34:35.714173 kernel: audit: type=1401 audit(1747442075.708:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 17 00:34:35.714222 kernel: audit: type=1300 audit(1747442075.708:212): arch=c000003e syscall=188 success=no exit=-22 a0=c0007feb10 a1=c000159f08 a2=c0007feae0 a3=25 items=0 ppid=1 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:35.708000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 17 00:34:35.708000 audit[1758]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007feb10 a1=c000159f08 a2=c0007feae0 a3=25 items=0 ppid=1 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:35.714403 kubelet[1758]: E0517 00:34:35.712562 1758 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 17 00:34:35.718742 kernel: audit: type=1327 audit(1747442075.708:212): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 17 00:34:35.708000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 17 00:34:35.722924 kubelet[1758]: E0517 00:34:35.722871 1758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms"
May 17 00:34:35.708000 audit[1758]: AVC avc: denied { mac_admin } for pid=1758 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:34:35.726684 kernel: audit: type=1400 audit(1747442075.708:213): avc: denied { mac_admin } for pid=1758 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 17 00:34:35.728556 kernel: audit: type=1401 audit(1747442075.708:213): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 17 00:34:35.708000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 17 00:34:35.733481 kernel: audit: type=1300 audit(1747442075.708:213): arch=c000003e syscall=188 success=no exit=-22 a0=c000359aa0 a1=c000159f20 a2=c0007feba0 a3=25 items=0 ppid=1 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:35.708000 audit[1758]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000359aa0 a1=c000159f20 a2=c0007feba0 a3=25 items=0 ppid=1 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:35.738130 kernel: audit: type=1327 audit(1747442075.708:213): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 17 00:34:35.708000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 17 00:34:35.740573 kernel: audit: type=1325 audit(1747442075.714:214): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:34:35.714000 audit[1771]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:34:35.740648 kubelet[1758]: W0517 00:34:35.722781 1758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused
May 17 00:34:35.740648 kubelet[1758]: E0517 00:34:35.738668 1758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError"
May 17 00:34:35.740648 kubelet[1758]: I0517 00:34:35.738797 1758 factory.go:221] Registration of the systemd container factory successfully
May 17 00:34:35.740648 kubelet[1758]: I0517 00:34:35.738949 1758 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:34:35.714000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd55123140 a2=0 a3=7ffd5512312c items=0 ppid=1758 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:35.745407 kernel: audit: type=1300 audit(1747442075.714:214): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd55123140 a2=0 a3=7ffd5512312c items=0 ppid=1758 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:35.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
May 17 00:34:35.715000 audit[1772]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:34:35.715000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7713e0d0 a2=0 a3=7ffe7713e0bc items=0 ppid=1758 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:35.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
May 17 00:34:35.737000 audit[1774]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:34:35.737000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc1a625040 a2=0 a3=7ffc1a62502c items=0 ppid=1758 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:34:35.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
May 17 00:34:35.746687 kubelet[1758]: E0517 00:34:35.745437 1758 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1840294f3c09b7fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:34:35.706177533 +0000
UTC m=+0.362971531,LastTimestamp:2025-05-17 00:34:35.706177533 +0000 UTC m=+0.362971531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:34:35.747000 audit[1776]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:35.747000 audit[1776]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc0ba74570 a2=0 a3=7ffc0ba7455c items=0 ppid=1758 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:35.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:34:35.747693 kubelet[1758]: I0517 00:34:35.747669 1758 factory.go:221] Registration of the containerd container factory successfully May 17 00:34:35.752000 audit[1779]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1779 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:35.752000 audit[1779]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff10114a50 a2=0 a3=7fff10114a3c items=0 ppid=1758 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:35.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 17 00:34:35.753174 kubelet[1758]: I0517 00:34:35.753058 1758 kubelet_network_linux.go:50] "Initialized 
iptables rules." protocol="IPv4" May 17 00:34:35.753000 audit[1780]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:35.753000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff5a7620c0 a2=0 a3=7fff5a7620ac items=0 ppid=1758 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:35.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:34:35.753929 kubelet[1758]: I0517 00:34:35.753878 1758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:34:35.753929 kubelet[1758]: I0517 00:34:35.753904 1758 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:34:35.753983 kubelet[1758]: I0517 00:34:35.753932 1758 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:34:35.754006 kubelet[1758]: E0517 00:34:35.753983 1758 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:34:35.754000 audit[1782]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:35.754000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb763abc0 a2=0 a3=7ffdb763abac items=0 ppid=1758 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:35.754000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:34:35.755000 audit[1783]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:35.755000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0fb475b0 a2=0 a3=7fff0fb4759c items=0 ppid=1758 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:35.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:34:35.756000 audit[1784]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:35.756000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe68a8ee30 a2=0 a3=7ffe68a8ee1c items=0 ppid=1758 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:35.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:34:35.757000 audit[1785]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1785 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:35.757000 audit[1785]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc0424560 a2=0 a3=7ffcc042454c items=0 ppid=1758 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) May 17 00:34:35.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:34:35.758000 audit[1786]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:35.758000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc317698a0 a2=0 a3=7ffc3176988c items=0 ppid=1758 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:35.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:34:35.758000 audit[1787]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:35.758000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff87c797c0 a2=0 a3=7fff87c797ac items=0 ppid=1758 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:35.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:34:35.760420 kubelet[1758]: E0517 00:34:35.760376 1758 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:34:35.768118 kubelet[1758]: W0517 00:34:35.768041 1758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 17 00:34:35.768285 kubelet[1758]: E0517 00:34:35.768263 1758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:35.773038 kubelet[1758]: I0517 00:34:35.773012 1758 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:34:35.773038 kubelet[1758]: I0517 00:34:35.773025 1758 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:34:35.773131 kubelet[1758]: I0517 00:34:35.773041 1758 state_mem.go:36] "Initialized new in-memory state store" May 17 00:34:35.813617 kubelet[1758]: E0517 00:34:35.813568 1758 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:35.855012 kubelet[1758]: E0517 00:34:35.854934 1758 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:34:35.914136 kubelet[1758]: E0517 00:34:35.914086 1758 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:35.923759 kubelet[1758]: E0517 00:34:35.923724 1758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" 
interval="400ms" May 17 00:34:36.015205 kubelet[1758]: E0517 00:34:36.015108 1758 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:36.055413 kubelet[1758]: E0517 00:34:36.055350 1758 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:34:36.115911 kubelet[1758]: E0517 00:34:36.115858 1758 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:36.158154 kubelet[1758]: I0517 00:34:36.158128 1758 policy_none.go:49] "None policy: Start" May 17 00:34:36.159130 kubelet[1758]: I0517 00:34:36.159093 1758 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:34:36.159130 kubelet[1758]: I0517 00:34:36.159118 1758 state_mem.go:35] "Initializing new in-memory state store" May 17 00:34:36.164832 kubelet[1758]: I0517 00:34:36.164800 1758 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:34:36.164000 audit[1758]: AVC avc: denied { mac_admin } for pid=1758 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:34:36.164000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:34:36.164000 audit[1758]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011537d0 a1=c001154498 a2=c0011537a0 a3=25 items=0 ppid=1 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:36.164000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:34:36.165190 kubelet[1758]: I0517 00:34:36.164892 1758 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:34:36.165190 kubelet[1758]: I0517 00:34:36.165103 1758 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:34:36.165190 kubelet[1758]: I0517 00:34:36.165119 1758 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:34:36.165611 kubelet[1758]: I0517 00:34:36.165582 1758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:34:36.166249 kubelet[1758]: E0517 00:34:36.166222 1758 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 00:34:36.267347 kubelet[1758]: I0517 00:34:36.267242 1758 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:34:36.267655 kubelet[1758]: E0517 00:34:36.267627 1758 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" May 17 00:34:36.324394 kubelet[1758]: E0517 00:34:36.324362 1758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" May 17 00:34:36.510917 kubelet[1758]: I0517 00:34:36.510882 1758 kubelet_node_status.go:72] "Attempting to 
register node" node="localhost" May 17 00:34:36.511389 kubelet[1758]: E0517 00:34:36.511351 1758 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" May 17 00:34:36.517052 kubelet[1758]: I0517 00:34:36.517002 1758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:36.517052 kubelet[1758]: I0517 00:34:36.517040 1758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:36.517270 kubelet[1758]: I0517 00:34:36.517105 1758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:36.517270 kubelet[1758]: I0517 00:34:36.517143 1758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f82784620011d998c25f43523c5f25ea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f82784620011d998c25f43523c5f25ea\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:36.517270 kubelet[1758]: I0517 00:34:36.517164 1758 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f82784620011d998c25f43523c5f25ea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f82784620011d998c25f43523c5f25ea\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:36.517270 kubelet[1758]: I0517 00:34:36.517181 1758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f82784620011d998c25f43523c5f25ea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f82784620011d998c25f43523c5f25ea\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:36.517270 kubelet[1758]: I0517 00:34:36.517194 1758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:36.517653 kubelet[1758]: I0517 00:34:36.517213 1758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:36.517653 kubelet[1758]: I0517 00:34:36.517240 1758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:34:36.708926 kubelet[1758]: W0517 00:34:36.708855 1758 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 17 00:34:36.709088 kubelet[1758]: E0517 00:34:36.708930 1758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:36.739843 kubelet[1758]: W0517 00:34:36.739795 1758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 17 00:34:36.739930 kubelet[1758]: E0517 00:34:36.739848 1758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:36.814922 kubelet[1758]: E0517 00:34:36.814880 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:36.815580 kubelet[1758]: E0517 00:34:36.815509 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:36.815845 env[1307]: time="2025-05-17T00:34:36.815788609Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 17 00:34:36.816238 env[1307]: time="2025-05-17T00:34:36.815936206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f82784620011d998c25f43523c5f25ea,Namespace:kube-system,Attempt:0,}" May 17 00:34:36.817465 kubelet[1758]: E0517 00:34:36.817416 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:36.817951 env[1307]: time="2025-05-17T00:34:36.817887467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 17 00:34:36.822544 kubelet[1758]: W0517 00:34:36.822472 1758 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 17 00:34:36.822639 kubelet[1758]: E0517 00:34:36.822553 1758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:36.913200 kubelet[1758]: I0517 00:34:36.913161 1758 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:34:36.913520 kubelet[1758]: E0517 00:34:36.913481 1758 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" May 17 00:34:37.028941 kubelet[1758]: W0517 00:34:37.028824 1758 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 17 00:34:37.028941 kubelet[1758]: E0517 00:34:37.028930 1758 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:37.125364 kubelet[1758]: E0517 00:34:37.125240 1758 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s" May 17 00:34:37.537586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838754520.mount: Deactivated successfully. 
May 17 00:34:37.714767 kubelet[1758]: I0517 00:34:37.714736 1758 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:34:37.715153 kubelet[1758]: E0517 00:34:37.715128 1758 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" May 17 00:34:37.755232 kubelet[1758]: E0517 00:34:37.755177 1758 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 17 00:34:37.811536 env[1307]: time="2025-05-17T00:34:37.811475804Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.836372 env[1307]: time="2025-05-17T00:34:37.836305846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.840060 env[1307]: time="2025-05-17T00:34:37.839995710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.841541 env[1307]: time="2025-05-17T00:34:37.841478442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.842951 env[1307]: time="2025-05-17T00:34:37.842894048Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.853230 env[1307]: time="2025-05-17T00:34:37.853166834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.854798 env[1307]: time="2025-05-17T00:34:37.854765815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.856115 env[1307]: time="2025-05-17T00:34:37.856054132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.857497 env[1307]: time="2025-05-17T00:34:37.857455402Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.859122 env[1307]: time="2025-05-17T00:34:37.859032691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.861304 env[1307]: time="2025-05-17T00:34:37.861273887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.863986 env[1307]: time="2025-05-17T00:34:37.863960688Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:37.917181 env[1307]: time="2025-05-17T00:34:37.917125158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:37.917320 env[1307]: time="2025-05-17T00:34:37.917186553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:37.917320 env[1307]: time="2025-05-17T00:34:37.917197293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:37.917362 env[1307]: time="2025-05-17T00:34:37.917331976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/42a97b2a05e5c36d623a4b58faa402797944fcdc4c63804e3a210bd9fe01fee2 pid=1802 runtime=io.containerd.runc.v2 May 17 00:34:37.963191 env[1307]: time="2025-05-17T00:34:37.962998075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:37.963367 env[1307]: time="2025-05-17T00:34:37.963288730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:37.963367 env[1307]: time="2025-05-17T00:34:37.963308968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:37.963790 env[1307]: time="2025-05-17T00:34:37.963752761Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2295771c9a064aafe1b82f4c91017eeaac7d8b803c15aad421a21fb8120070e pid=1810 runtime=io.containerd.runc.v2 May 17 00:34:37.970466 env[1307]: time="2025-05-17T00:34:37.970385026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:37.970661 env[1307]: time="2025-05-17T00:34:37.970637790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:37.970768 env[1307]: time="2025-05-17T00:34:37.970745302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:37.971726 env[1307]: time="2025-05-17T00:34:37.971043041Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59fad75788978bb29cfcee4def6153e4493e4a9ac0d4c78fde387250ae95ba32 pid=1835 runtime=io.containerd.runc.v2 May 17 00:34:38.116405 env[1307]: time="2025-05-17T00:34:38.116264138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f82784620011d998c25f43523c5f25ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"42a97b2a05e5c36d623a4b58faa402797944fcdc4c63804e3a210bd9fe01fee2\"" May 17 00:34:38.121002 kubelet[1758]: E0517 00:34:38.120964 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:38.123035 env[1307]: time="2025-05-17T00:34:38.122991782Z" level=info msg="CreateContainer within sandbox \"42a97b2a05e5c36d623a4b58faa402797944fcdc4c63804e3a210bd9fe01fee2\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:34:38.124431 env[1307]: time="2025-05-17T00:34:38.124392781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"59fad75788978bb29cfcee4def6153e4493e4a9ac0d4c78fde387250ae95ba32\"" May 17 00:34:38.126360 kubelet[1758]: E0517 00:34:38.126332 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:38.128830 env[1307]: time="2025-05-17T00:34:38.128790152Z" level=info msg="CreateContainer within sandbox \"59fad75788978bb29cfcee4def6153e4493e4a9ac0d4c78fde387250ae95ba32\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:34:38.134303 env[1307]: time="2025-05-17T00:34:38.134255878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2295771c9a064aafe1b82f4c91017eeaac7d8b803c15aad421a21fb8120070e\"" May 17 00:34:38.135948 kubelet[1758]: E0517 00:34:38.135708 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:38.138133 env[1307]: time="2025-05-17T00:34:38.138083510Z" level=info msg="CreateContainer within sandbox \"a2295771c9a064aafe1b82f4c91017eeaac7d8b803c15aad421a21fb8120070e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:34:38.142314 env[1307]: time="2025-05-17T00:34:38.142259747Z" level=info msg="CreateContainer within sandbox \"42a97b2a05e5c36d623a4b58faa402797944fcdc4c63804e3a210bd9fe01fee2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"5f9ac956be5e4548da910c60632aa4cba320c556d660e68182daa8a88ce7ca9f\"" May 17 00:34:38.143501 env[1307]: time="2025-05-17T00:34:38.143477502Z" level=info msg="StartContainer for \"5f9ac956be5e4548da910c60632aa4cba320c556d660e68182daa8a88ce7ca9f\"" May 17 00:34:38.249107 env[1307]: time="2025-05-17T00:34:38.249006259Z" level=info msg="CreateContainer within sandbox \"59fad75788978bb29cfcee4def6153e4493e4a9ac0d4c78fde387250ae95ba32\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a2699baa32e2f4c8d5d0d6d8d43d3a1722765d02bd143e92fde1f0b70caada0\"" May 17 00:34:38.249798 env[1307]: time="2025-05-17T00:34:38.249763590Z" level=info msg="StartContainer for \"4a2699baa32e2f4c8d5d0d6d8d43d3a1722765d02bd143e92fde1f0b70caada0\"" May 17 00:34:38.262314 env[1307]: time="2025-05-17T00:34:38.262252094Z" level=info msg="StartContainer for \"5f9ac956be5e4548da910c60632aa4cba320c556d660e68182daa8a88ce7ca9f\" returns successfully" May 17 00:34:38.268219 env[1307]: time="2025-05-17T00:34:38.268182391Z" level=info msg="CreateContainer within sandbox \"a2295771c9a064aafe1b82f4c91017eeaac7d8b803c15aad421a21fb8120070e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9db0f14449487563610c5186b96d94023e5d8966370397d8d25fc137d05f90e1\"" May 17 00:34:38.268719 env[1307]: time="2025-05-17T00:34:38.268698139Z" level=info msg="StartContainer for \"9db0f14449487563610c5186b96d94023e5d8966370397d8d25fc137d05f90e1\"" May 17 00:34:38.342723 env[1307]: time="2025-05-17T00:34:38.342647881Z" level=info msg="StartContainer for \"4a2699baa32e2f4c8d5d0d6d8d43d3a1722765d02bd143e92fde1f0b70caada0\" returns successfully" May 17 00:34:38.393542 env[1307]: time="2025-05-17T00:34:38.393437904Z" level=info msg="StartContainer for \"9db0f14449487563610c5186b96d94023e5d8966370397d8d25fc137d05f90e1\" returns successfully" May 17 00:34:38.776684 kubelet[1758]: E0517 00:34:38.776584 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:38.778045 kubelet[1758]: E0517 00:34:38.778021 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:38.779241 kubelet[1758]: E0517 00:34:38.779219 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:39.317224 kubelet[1758]: I0517 00:34:39.317162 1758 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:34:39.792664 kubelet[1758]: E0517 00:34:39.792488 1758 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:39.889959 kubelet[1758]: E0517 00:34:39.889913 1758 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 17 00:34:39.971162 kubelet[1758]: I0517 00:34:39.971123 1758 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:34:40.702960 kubelet[1758]: I0517 00:34:40.702919 1758 apiserver.go:52] "Watching apiserver" May 17 00:34:40.712243 kubelet[1758]: I0517 00:34:40.712210 1758 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:34:42.335475 systemd[1]: Reloading. 
May 17 00:34:42.410669 /usr/lib/systemd/system-generators/torcx-generator[2051]: time="2025-05-17T00:34:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:34:42.410701 /usr/lib/systemd/system-generators/torcx-generator[2051]: time="2025-05-17T00:34:42Z" level=info msg="torcx already run" May 17 00:34:42.500055 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:34:42.500082 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:34:42.520205 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:34:42.596611 systemd[1]: Stopping kubelet.service... May 17 00:34:42.620447 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:34:42.620693 systemd[1]: Stopped kubelet.service. May 17 00:34:42.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:42.621872 kernel: kauditd_printk_skb: 38 callbacks suppressed May 17 00:34:42.621928 kernel: audit: type=1131 audit(1747442082.619:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:42.622567 systemd[1]: Starting kubelet.service... May 17 00:34:42.774796 systemd[1]: Started kubelet.service. 
May 17 00:34:42.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:42.779112 kernel: audit: type=1130 audit(1747442082.774:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:42.822156 kubelet[2108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:34:42.822156 kubelet[2108]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:34:42.822156 kubelet[2108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:34:42.822556 kubelet[2108]: I0517 00:34:42.822191 2108 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:34:42.829351 kubelet[2108]: I0517 00:34:42.829319 2108 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:34:42.829351 kubelet[2108]: I0517 00:34:42.829340 2108 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:34:42.829596 kubelet[2108]: I0517 00:34:42.829575 2108 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:34:42.830698 kubelet[2108]: I0517 00:34:42.830677 2108 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:34:42.832916 kubelet[2108]: I0517 00:34:42.832888 2108 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:34:42.836998 kubelet[2108]: E0517 00:34:42.836951 2108 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:34:42.836998 kubelet[2108]: I0517 00:34:42.836985 2108 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:34:42.840648 kubelet[2108]: I0517 00:34:42.840621 2108 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:34:42.840967 kubelet[2108]: I0517 00:34:42.840944 2108 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:34:42.841105 kubelet[2108]: I0517 00:34:42.841053 2108 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:34:42.841339 kubelet[2108]: I0517 00:34:42.841100 2108 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} May 17 00:34:42.841421 kubelet[2108]: I0517 00:34:42.841346 2108 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:34:42.841421 kubelet[2108]: I0517 00:34:42.841356 2108 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:34:42.841421 kubelet[2108]: I0517 00:34:42.841382 2108 state_mem.go:36] "Initialized new in-memory state store" May 17 00:34:42.841486 kubelet[2108]: I0517 00:34:42.841460 2108 kubelet.go:408] "Attempting to sync node with API server" May 17 00:34:42.841486 kubelet[2108]: I0517 00:34:42.841472 2108 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:34:42.841533 kubelet[2108]: I0517 00:34:42.841495 2108 kubelet.go:314] "Adding apiserver pod source" May 17 00:34:42.841533 kubelet[2108]: I0517 00:34:42.841518 2108 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:34:42.842157 kubelet[2108]: I0517 00:34:42.842138 2108 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:34:42.842610 kubelet[2108]: I0517 00:34:42.842598 2108 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:34:42.843129 kubelet[2108]: I0517 00:34:42.843116 2108 server.go:1274] "Started kubelet" May 17 00:34:42.843288 kubelet[2108]: I0517 00:34:42.843253 2108 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:34:42.843618 kubelet[2108]: I0517 00:34:42.843573 2108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:34:42.843870 kubelet[2108]: I0517 00:34:42.843857 2108 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:34:42.844713 kubelet[2108]: I0517 00:34:42.844690 2108 server.go:449] "Adding debug handlers to kubelet server" May 17 00:34:42.855980 
kernel: audit: type=1400 audit(1747442082.844:229): avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:34:42.856143 kernel: audit: type=1401 audit(1747442082.844:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:34:42.856162 kernel: audit: type=1300 audit(1747442082.844:229): arch=c000003e syscall=188 success=no exit=-22 a0=c000a2fe60 a1=c0009203a8 a2=c000a2fe30 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:42.844000 audit[2108]: AVC avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:34:42.844000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:34:42.844000 audit[2108]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a2fe60 a1=c0009203a8 a2=c000a2fe30 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.845933 2108 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.845970 2108 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 
00:34:42.856395 kubelet[2108]: I0517 00:34:42.845993 2108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.846593 2108 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.850138 2108 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:34:42.856395 kubelet[2108]: E0517 00:34:42.850301 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.851555 2108 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.851640 2108 reconciler.go:26] "Reconciler: start to sync state" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.853121 2108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.853559 2108 factory.go:221] Registration of the systemd container factory successfully May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.853837 2108 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.853856 2108 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.853859 2108 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:34:42.856395 kubelet[2108]: I0517 00:34:42.853875 2108 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:34:42.856723 kubelet[2108]: E0517 00:34:42.853914 2108 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:34:42.844000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:34:42.859324 kubelet[2108]: I0517 00:34:42.858859 2108 factory.go:221] Registration of the containerd container factory successfully May 17 00:34:42.862108 kernel: audit: type=1327 audit(1747442082.844:229): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:34:42.844000 audit[2108]: AVC avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:34:42.870830 kernel: audit: type=1400 audit(1747442082.844:230): avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:34:42.870922 kernel: audit: type=1401 audit(1747442082.844:230): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:34:42.844000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:34:42.844000 audit[2108]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000431f60 a1=c0009203c0 a2=c000a2fef0 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:42.876649 kernel: audit: type=1300 audit(1747442082.844:230): arch=c000003e syscall=188 success=no exit=-22 a0=c000431f60 a1=c0009203c0 a2=c000a2fef0 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:42.876744 kernel: audit: type=1327 audit(1747442082.844:230): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:34:42.844000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:34:42.941218 kubelet[2108]: I0517 00:34:42.941184 2108 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:34:42.941372 kubelet[2108]: I0517 00:34:42.941228 2108 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:34:42.941372 kubelet[2108]: I0517 00:34:42.941247 2108 state_mem.go:36] "Initialized new in-memory state store" 
May 17 00:34:42.941441 kubelet[2108]: I0517 00:34:42.941418 2108 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:34:42.941488 kubelet[2108]: I0517 00:34:42.941435 2108 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:34:42.941488 kubelet[2108]: I0517 00:34:42.941455 2108 policy_none.go:49] "None policy: Start" May 17 00:34:42.942166 kubelet[2108]: I0517 00:34:42.942147 2108 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:34:42.942259 kubelet[2108]: I0517 00:34:42.942247 2108 state_mem.go:35] "Initializing new in-memory state store" May 17 00:34:42.942509 kubelet[2108]: I0517 00:34:42.942497 2108 state_mem.go:75] "Updated machine memory state" May 17 00:34:42.944078 kubelet[2108]: I0517 00:34:42.944032 2108 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:34:42.942000 audit[2108]: AVC avc: denied { mac_admin } for pid=2108 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:34:42.942000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:34:42.942000 audit[2108]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009b7470 a1=c000f988e8 a2=c0009b7440 a3=25 items=0 ppid=1 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:42.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:34:42.944330 kubelet[2108]: I0517 00:34:42.944123 2108 server.go:88] "Unprivileged containerized plugins 
might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:34:42.944330 kubelet[2108]: I0517 00:34:42.944258 2108 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:34:42.944330 kubelet[2108]: I0517 00:34:42.944269 2108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:34:42.944460 kubelet[2108]: I0517 00:34:42.944437 2108 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:34:43.048236 kubelet[2108]: I0517 00:34:43.048189 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:34:43.153344 kubelet[2108]: I0517 00:34:43.153186 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f82784620011d998c25f43523c5f25ea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f82784620011d998c25f43523c5f25ea\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:43.153469 kubelet[2108]: I0517 00:34:43.153348 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f82784620011d998c25f43523c5f25ea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f82784620011d998c25f43523c5f25ea\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:43.153469 kubelet[2108]: I0517 00:34:43.153382 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f82784620011d998c25f43523c5f25ea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f82784620011d998c25f43523c5f25ea\") " pod="kube-system/kube-apiserver-localhost" May 17 00:34:43.153469 kubelet[2108]: I0517 00:34:43.153416 2108 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:43.153469 kubelet[2108]: I0517 00:34:43.153436 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:43.153469 kubelet[2108]: I0517 00:34:43.153454 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:43.153594 kubelet[2108]: I0517 00:34:43.153471 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:43.153594 kubelet[2108]: I0517 00:34:43.153492 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:34:43.153594 kubelet[2108]: I0517 
00:34:43.153510 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:34:43.319450 kubelet[2108]: E0517 00:34:43.319408 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:43.320451 kubelet[2108]: E0517 00:34:43.320413 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:43.320660 kubelet[2108]: E0517 00:34:43.320634 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:43.334660 kubelet[2108]: I0517 00:34:43.334625 2108 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 17 00:34:43.334756 kubelet[2108]: I0517 00:34:43.334695 2108 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:34:43.842148 kubelet[2108]: I0517 00:34:43.842092 2108 apiserver.go:52] "Watching apiserver" May 17 00:34:43.852230 kubelet[2108]: I0517 00:34:43.852175 2108 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:34:43.913015 kubelet[2108]: E0517 00:34:43.912979 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:43.913231 kubelet[2108]: E0517 00:34:43.913216 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:44.005985 kubelet[2108]: E0517 00:34:44.005942 2108 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:34:44.006322 kubelet[2108]: E0517 00:34:44.006310 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:44.178591 kubelet[2108]: I0517 00:34:44.177983 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.177932248 podStartE2EDuration="1.177932248s" podCreationTimestamp="2025-05-17 00:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:34:44.006358758 +0000 UTC m=+1.224112861" watchObservedRunningTime="2025-05-17 00:34:44.177932248 +0000 UTC m=+1.395686361" May 17 00:34:44.231020 kubelet[2108]: I0517 00:34:44.230962 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.230927445 podStartE2EDuration="1.230927445s" podCreationTimestamp="2025-05-17 00:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:34:44.178940552 +0000 UTC m=+1.396694665" watchObservedRunningTime="2025-05-17 00:34:44.230927445 +0000 UTC m=+1.448681558" May 17 00:34:44.355220 kubelet[2108]: I0517 00:34:44.355137 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.355104158 podStartE2EDuration="1.355104158s" podCreationTimestamp="2025-05-17 00:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:34:44.231609397 +0000 UTC m=+1.449363510" watchObservedRunningTime="2025-05-17 00:34:44.355104158 +0000 UTC m=+1.572858271" May 17 00:34:44.914878 kubelet[2108]: E0517 00:34:44.914825 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:46.554113 kubelet[2108]: I0517 00:34:46.554060 2108 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:34:46.554580 env[1307]: time="2025-05-17T00:34:46.554538566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:34:46.554846 kubelet[2108]: I0517 00:34:46.554731 2108 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:34:47.481148 kubelet[2108]: I0517 00:34:47.481086 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/529c1c01-9271-47a7-9ece-a6454631ea85-lib-modules\") pod \"kube-proxy-w77lh\" (UID: \"529c1c01-9271-47a7-9ece-a6454631ea85\") " pod="kube-system/kube-proxy-w77lh" May 17 00:34:47.481148 kubelet[2108]: I0517 00:34:47.481151 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2ss\" (UniqueName: \"kubernetes.io/projected/529c1c01-9271-47a7-9ece-a6454631ea85-kube-api-access-jd2ss\") pod \"kube-proxy-w77lh\" (UID: \"529c1c01-9271-47a7-9ece-a6454631ea85\") " pod="kube-system/kube-proxy-w77lh" May 17 00:34:47.481348 kubelet[2108]: I0517 00:34:47.481192 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/529c1c01-9271-47a7-9ece-a6454631ea85-xtables-lock\") pod 
\"kube-proxy-w77lh\" (UID: \"529c1c01-9271-47a7-9ece-a6454631ea85\") " pod="kube-system/kube-proxy-w77lh" May 17 00:34:47.481348 kubelet[2108]: I0517 00:34:47.481213 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/529c1c01-9271-47a7-9ece-a6454631ea85-kube-proxy\") pod \"kube-proxy-w77lh\" (UID: \"529c1c01-9271-47a7-9ece-a6454631ea85\") " pod="kube-system/kube-proxy-w77lh" May 17 00:34:47.615291 kubelet[2108]: E0517 00:34:47.615253 2108 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:34:47.615291 kubelet[2108]: E0517 00:34:47.615293 2108 projected.go:194] Error preparing data for projected volume kube-api-access-jd2ss for pod kube-system/kube-proxy-w77lh: configmap "kube-root-ca.crt" not found May 17 00:34:47.615779 kubelet[2108]: E0517 00:34:47.615377 2108 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/529c1c01-9271-47a7-9ece-a6454631ea85-kube-api-access-jd2ss podName:529c1c01-9271-47a7-9ece-a6454631ea85 nodeName:}" failed. No retries permitted until 2025-05-17 00:34:48.115347121 +0000 UTC m=+5.333101224 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jd2ss" (UniqueName: "kubernetes.io/projected/529c1c01-9271-47a7-9ece-a6454631ea85-kube-api-access-jd2ss") pod "kube-proxy-w77lh" (UID: "529c1c01-9271-47a7-9ece-a6454631ea85") : configmap "kube-root-ca.crt" not found May 17 00:34:47.682573 kubelet[2108]: I0517 00:34:47.682527 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/908854c7-e91d-46ea-aa26-f5e6d5a3e929-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-45fxb\" (UID: \"908854c7-e91d-46ea-aa26-f5e6d5a3e929\") " pod="tigera-operator/tigera-operator-7c5755cdcb-45fxb" May 17 00:34:47.682573 kubelet[2108]: I0517 00:34:47.682572 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x587\" (UniqueName: \"kubernetes.io/projected/908854c7-e91d-46ea-aa26-f5e6d5a3e929-kube-api-access-4x587\") pod \"tigera-operator-7c5755cdcb-45fxb\" (UID: \"908854c7-e91d-46ea-aa26-f5e6d5a3e929\") " pod="tigera-operator/tigera-operator-7c5755cdcb-45fxb" May 17 00:34:47.788578 kubelet[2108]: I0517 00:34:47.788481 2108 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:34:47.982395 env[1307]: time="2025-05-17T00:34:47.982329600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-45fxb,Uid:908854c7-e91d-46ea-aa26-f5e6d5a3e929,Namespace:tigera-operator,Attempt:0,}" May 17 00:34:47.999459 env[1307]: time="2025-05-17T00:34:47.999386265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:47.999459 env[1307]: time="2025-05-17T00:34:47.999418997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:47.999459 env[1307]: time="2025-05-17T00:34:47.999428545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:47.999657 env[1307]: time="2025-05-17T00:34:47.999536280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bd4eac14d6f6793e9837c6a09625cd51083c7482589fb849e00df8087cb362f pid=2163 runtime=io.containerd.runc.v2 May 17 00:34:48.039798 env[1307]: time="2025-05-17T00:34:48.039683154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-45fxb,Uid:908854c7-e91d-46ea-aa26-f5e6d5a3e929,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1bd4eac14d6f6793e9837c6a09625cd51083c7482589fb849e00df8087cb362f\"" May 17 00:34:48.041425 env[1307]: time="2025-05-17T00:34:48.041392734Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:34:48.302106 kubelet[2108]: E0517 00:34:48.302052 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:48.370823 kubelet[2108]: E0517 00:34:48.370786 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:48.371364 env[1307]: time="2025-05-17T00:34:48.371327430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w77lh,Uid:529c1c01-9271-47a7-9ece-a6454631ea85,Namespace:kube-system,Attempt:0,}" May 17 00:34:48.639732 env[1307]: time="2025-05-17T00:34:48.639579464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:48.639732 env[1307]: time="2025-05-17T00:34:48.639626463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:48.639732 env[1307]: time="2025-05-17T00:34:48.639637374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:48.640013 env[1307]: time="2025-05-17T00:34:48.639776368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6dbce01f0f413f08216096d9289f8c63f01cc1d0eb264e6185c58aee0f30546 pid=2203 runtime=io.containerd.runc.v2 May 17 00:34:48.677226 env[1307]: time="2025-05-17T00:34:48.677174305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w77lh,Uid:529c1c01-9271-47a7-9ece-a6454631ea85,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6dbce01f0f413f08216096d9289f8c63f01cc1d0eb264e6185c58aee0f30546\"" May 17 00:34:48.677961 kubelet[2108]: E0517 00:34:48.677936 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:48.680679 env[1307]: time="2025-05-17T00:34:48.679877995Z" level=info msg="CreateContainer within sandbox \"b6dbce01f0f413f08216096d9289f8c63f01cc1d0eb264e6185c58aee0f30546\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:34:48.695050 env[1307]: time="2025-05-17T00:34:48.695007670Z" level=info msg="CreateContainer within sandbox \"b6dbce01f0f413f08216096d9289f8c63f01cc1d0eb264e6185c58aee0f30546\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e2675f6683597e527153ffb74d8d0487ae89aca33a867f78001d6b8fe02b639\"" May 17 00:34:48.695770 env[1307]: time="2025-05-17T00:34:48.695641524Z" level=info msg="StartContainer for 
\"0e2675f6683597e527153ffb74d8d0487ae89aca33a867f78001d6b8fe02b639\"" May 17 00:34:48.779797 env[1307]: time="2025-05-17T00:34:48.779733638Z" level=info msg="StartContainer for \"0e2675f6683597e527153ffb74d8d0487ae89aca33a867f78001d6b8fe02b639\" returns successfully" May 17 00:34:48.897000 audit[2304]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:48.899900 kernel: kauditd_printk_skb: 4 callbacks suppressed May 17 00:34:48.900031 kernel: audit: type=1325 audit(1747442088.897:232): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2304 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:48.897000 audit[2304]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc002c80c0 a2=0 a3=7ffc002c80ac items=0 ppid=2253 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.907445 kernel: audit: type=1300 audit(1747442088.897:232): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc002c80c0 a2=0 a3=7ffc002c80ac items=0 ppid=2253 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.907522 kernel: audit: type=1327 audit(1747442088.897:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:34:48.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:34:48.910097 kernel: audit: type=1325 audit(1747442088.898:233): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 
00:34:48.898000 audit[2305]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:48.898000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff45a96940 a2=0 a3=92b989d134c1f352 items=0 ppid=2253 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.917256 kernel: audit: type=1300 audit(1747442088.898:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff45a96940 a2=0 a3=92b989d134c1f352 items=0 ppid=2253 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.898000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:34:48.919516 kernel: audit: type=1327 audit(1747442088.898:233): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:34:48.919552 kernel: audit: type=1325 audit(1747442088.898:234): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:48.898000 audit[2306]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:48.898000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd12a38820 a2=0 a3=7ffd12a3880c items=0 ppid=2253 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.923043 
kubelet[2108]: E0517 00:34:48.923021 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:48.924204 kubelet[2108]: E0517 00:34:48.924188 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:48.926342 kernel: audit: type=1300 audit(1747442088.898:234): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd12a38820 a2=0 a3=7ffd12a3880c items=0 ppid=2253 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.898000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:34:48.928659 kernel: audit: type=1327 audit(1747442088.898:234): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:34:48.899000 audit[2307]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:48.931090 kernel: audit: type=1325 audit(1747442088.899:235): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:48.899000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe248d7f20 a2=0 a3=7ffe248d7f0c items=0 ppid=2253 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.899000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:34:48.900000 audit[2308]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:48.900000 audit[2308]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff81438f40 a2=0 a3=7fff81438f2c items=0 ppid=2253 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:34:48.901000 audit[2309]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:48.901000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5cfe2570 a2=0 a3=7ffe5cfe255c items=0 ppid=2253 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:48.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:34:48.932639 kubelet[2108]: I0517 00:34:48.932533 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w77lh" podStartSLOduration=1.9325184069999999 podStartE2EDuration="1.932518407s" podCreationTimestamp="2025-05-17 00:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:34:48.932181567 +0000 UTC m=+6.149935680" watchObservedRunningTime="2025-05-17 
00:34:48.932518407 +0000 UTC m=+6.150272520" May 17 00:34:49.001000 audit[2310]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.001000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffe3de9360 a2=0 a3=7fffe3de934c items=0 ppid=2253 pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:34:49.004000 audit[2312]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.004000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff939e2410 a2=0 a3=7fff939e23fc items=0 ppid=2253 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 17 00:34:49.007000 audit[2315]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.007000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff26b9fe90 a2=0 a3=7fff26b9fe7c items=0 ppid=2253 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 17 00:34:49.008000 audit[2316]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.008000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0b5544a0 a2=0 a3=7ffd0b55448c items=0 ppid=2253 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:34:49.010000 audit[2318]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.010000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe74d8efa0 a2=0 a3=7ffe74d8ef8c items=0 ppid=2253 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:34:49.011000 audit[2319]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2319 
subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.011000 audit[2319]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3b686720 a2=0 a3=7ffd3b68670c items=0 ppid=2253 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:34:49.013000 audit[2321]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.013000 audit[2321]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffff814c6d0 a2=0 a3=7ffff814c6bc items=0 ppid=2253 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:34:49.016000 audit[2324]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.016000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdd70918c0 a2=0 a3=7ffdd70918ac items=0 ppid=2253 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.016000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 17 00:34:49.017000 audit[2325]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.017000 audit[2325]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd32bfa80 a2=0 a3=7ffcd32bfa6c items=0 ppid=2253 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.017000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:34:49.019000 audit[2327]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.019000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffd44a350 a2=0 a3=7ffffd44a33c items=0 ppid=2253 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:34:49.020000 audit[2328]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.020000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffca8147e90 a2=0 a3=7ffca8147e7c items=0 ppid=2253 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:34:49.022000 audit[2330]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.022000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1603ee20 a2=0 a3=7fff1603ee0c items=0 ppid=2253 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:34:49.025000 audit[2333]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.025000 audit[2333]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe2293f870 a2=0 a3=7ffe2293f85c items=0 ppid=2253 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.025000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:34:49.029000 audit[2336]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.029000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda0ac1050 a2=0 a3=7ffda0ac103c items=0 ppid=2253 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.029000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:34:49.030000 audit[2337]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.030000 audit[2337]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe5493d590 a2=0 a3=7ffe5493d57c items=0 ppid=2253 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:34:49.032000 audit[2339]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.032000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffc14a79dd0 a2=0 a3=7ffc14a79dbc items=0 ppid=2253 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:34:49.035000 audit[2342]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.035000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff546f0b90 a2=0 a3=7fff546f0b7c items=0 ppid=2253 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:34:49.035000 audit[2343]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.035000 audit[2343]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7dabce90 a2=0 a3=7ffd7dabce7c items=0 ppid=2253 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 
00:34:49.037000 audit[2345]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:34:49.037000 audit[2345]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc7bd4f9c0 a2=0 a3=7ffc7bd4f9ac items=0 ppid=2253 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:34:49.068000 audit[2351]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:49.068000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd02af50e0 a2=0 a3=7ffd02af50cc items=0 ppid=2253 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:49.076000 audit[2351]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:49.076000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd02af50e0 a2=0 a3=7ffd02af50cc items=0 ppid=2253 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.076000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:49.078000 audit[2356]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.078000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc1e57c6d0 a2=0 a3=7ffc1e57c6bc items=0 ppid=2253 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.078000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:34:49.081000 audit[2358]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.081000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc034112b0 a2=0 a3=7ffc0341129c items=0 ppid=2253 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 17 00:34:49.085000 audit[2361]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.085000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7fffcbcff2f0 a2=0 a3=7fffcbcff2dc items=0 ppid=2253 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 17 00:34:49.086000 audit[2362]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.086000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdb61e780 a2=0 a3=7fffdb61e76c items=0 ppid=2253 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.086000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:34:49.089000 audit[2364]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.089000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd33085ea0 a2=0 a3=7ffd33085e8c items=0 ppid=2253 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.089000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:34:49.090000 audit[2365]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.090000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe2f4a810 a2=0 a3=7fffe2f4a7fc items=0 ppid=2253 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:34:49.092000 audit[2367]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.092000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff102a9e50 a2=0 a3=7fff102a9e3c items=0 ppid=2253 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.092000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 17 00:34:49.096000 audit[2370]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.096000 audit[2370]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffddd58b360 a2=0 a3=7ffddd58b34c items=0 ppid=2253 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:34:49.097000 audit[2371]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.097000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2433cdb0 a2=0 a3=7fff2433cd9c items=0 ppid=2253 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:34:49.101000 audit[2373]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.101000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc76acd40 a2=0 a3=7fffc76acd2c items=0 ppid=2253 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.101000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:34:49.102000 audit[2374]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.102000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2986e300 a2=0 a3=7ffe2986e2ec items=0 ppid=2253 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:34:49.105000 audit[2376]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.105000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe51305080 a2=0 a3=7ffe5130506c items=0 ppid=2253 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:34:49.109000 audit[2379]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.109000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffc39459cb0 a2=0 a3=7ffc39459c9c items=0 ppid=2253 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:34:49.112000 audit[2382]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.112000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe45a103c0 a2=0 a3=7ffe45a103ac items=0 ppid=2253 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.112000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 17 00:34:49.113000 audit[2383]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.113000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce527aad0 a2=0 a3=7ffce527aabc items=0 ppid=2253 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.113000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:34:49.115000 audit[2385]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.115000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe9eecb080 a2=0 a3=7ffe9eecb06c items=0 ppid=2253 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.115000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:34:49.119000 audit[2388]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.119000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd7bd170d0 a2=0 a3=7ffd7bd170bc items=0 ppid=2253 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.119000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:34:49.120000 audit[2389]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.120000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7c919ab0 a2=0 a3=7ffc7c919a9c items=0 ppid=2253 
pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.120000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:34:49.123000 audit[2391]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.123000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff2648cfa0 a2=0 a3=7fff2648cf8c items=0 ppid=2253 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:34:49.123000 audit[2392]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.123000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefb744930 a2=0 a3=7ffefb74491c items=0 ppid=2253 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:34:49.125000 audit[2394]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" May 17 00:34:49.125000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd9bc9ac90 a2=0 a3=7ffd9bc9ac7c items=0 ppid=2253 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.125000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:34:49.128000 audit[2397]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:34:49.128000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd70776010 a2=0 a3=7ffd70775ffc items=0 ppid=2253 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:34:49.131000 audit[2399]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:34:49.131000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffeea6cd980 a2=0 a3=7ffeea6cd96c items=0 ppid=2253 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.131000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:49.132000 audit[2399]: NETFILTER_CFG table=nat:88 
family=10 entries=7 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:34:49.132000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffeea6cd980 a2=0 a3=7ffeea6cd96c items=0 ppid=2253 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:49.132000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:49.488246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810263959.mount: Deactivated successfully. May 17 00:34:50.353795 env[1307]: time="2025-05-17T00:34:50.353723806Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:50.355836 env[1307]: time="2025-05-17T00:34:50.355798924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:50.357455 env[1307]: time="2025-05-17T00:34:50.357414160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:50.358956 env[1307]: time="2025-05-17T00:34:50.358912101Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:34:50.359538 env[1307]: time="2025-05-17T00:34:50.359496992Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference 
\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:34:50.361838 env[1307]: time="2025-05-17T00:34:50.361800523Z" level=info msg="CreateContainer within sandbox \"1bd4eac14d6f6793e9837c6a09625cd51083c7482589fb849e00df8087cb362f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:34:50.377671 env[1307]: time="2025-05-17T00:34:50.377599684Z" level=info msg="CreateContainer within sandbox \"1bd4eac14d6f6793e9837c6a09625cd51083c7482589fb849e00df8087cb362f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7154593c8b1238dab1ef189faa6b44fe7e0b960d2fc9d827beebc81710a667b8\"" May 17 00:34:50.378399 env[1307]: time="2025-05-17T00:34:50.378351281Z" level=info msg="StartContainer for \"7154593c8b1238dab1ef189faa6b44fe7e0b960d2fc9d827beebc81710a667b8\"" May 17 00:34:50.782241 env[1307]: time="2025-05-17T00:34:50.782105636Z" level=info msg="StartContainer for \"7154593c8b1238dab1ef189faa6b44fe7e0b960d2fc9d827beebc81710a667b8\" returns successfully" May 17 00:34:50.934596 kubelet[2108]: I0517 00:34:50.934532 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-45fxb" podStartSLOduration=1.6148888989999999 podStartE2EDuration="3.934512872s" podCreationTimestamp="2025-05-17 00:34:47 +0000 UTC" firstStartedPulling="2025-05-17 00:34:48.040943871 +0000 UTC m=+5.258697994" lastFinishedPulling="2025-05-17 00:34:50.360567854 +0000 UTC m=+7.578321967" observedRunningTime="2025-05-17 00:34:50.93426417 +0000 UTC m=+8.152018283" watchObservedRunningTime="2025-05-17 00:34:50.934512872 +0000 UTC m=+8.152266985" May 17 00:34:51.127154 kubelet[2108]: E0517 00:34:51.127120 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:51.928845 kubelet[2108]: E0517 00:34:51.928807 2108 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:52.701843 kubelet[2108]: E0517 00:34:52.701792 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:52.905216 update_engine[1295]: I0517 00:34:52.905158 1295 update_attempter.cc:509] Updating boot flags... May 17 00:34:52.938101 kubelet[2108]: E0517 00:34:52.936578 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:55.619679 sudo[1471]: pam_unix(sudo:session): session closed for user root May 17 00:34:55.618000 audit[1471]: USER_END pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:34:55.620987 kernel: kauditd_printk_skb: 143 callbacks suppressed May 17 00:34:55.621039 kernel: audit: type=1106 audit(1747442095.618:283): pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:34:55.618000 audit[1471]: CRED_DISP pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:34:55.629093 kernel: audit: type=1104 audit(1747442095.618:284): pid=1471 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 00:34:55.629324 sshd[1466]: pam_unix(sshd:session): session closed for user core May 17 00:34:55.629000 audit[1466]: USER_END pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:55.629000 audit[1466]: CRED_DISP pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:55.636400 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:41160.service: Deactivated successfully. May 17 00:34:55.637233 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:34:55.639663 kernel: audit: type=1106 audit(1747442095.629:285): pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:55.639716 kernel: audit: type=1104 audit(1747442095.629:286): pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:34:55.640132 systemd-logind[1292]: Session 7 logged out. Waiting for processes to exit. May 17 00:34:55.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:41160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:55.641316 systemd-logind[1292]: Removed session 7. 
May 17 00:34:55.646102 kernel: audit: type=1131 audit(1747442095.635:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:41160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:34:55.926000 audit[2505]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:55.926000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd17630080 a2=0 a3=7ffd1763006c items=0 ppid=2253 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:55.936840 kernel: audit: type=1325 audit(1747442095.926:288): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:55.936996 kernel: audit: type=1300 audit(1747442095.926:288): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd17630080 a2=0 a3=7ffd1763006c items=0 ppid=2253 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:55.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:55.940086 kernel: audit: type=1327 audit(1747442095.926:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:55.940000 audit[2505]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:55.940000 audit[2505]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2700 a0=3 a1=7ffd17630080 a2=0 a3=0 items=0 ppid=2253 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:55.949639 kernel: audit: type=1325 audit(1747442095.940:289): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:55.949692 kernel: audit: type=1300 audit(1747442095.940:289): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd17630080 a2=0 a3=0 items=0 ppid=2253 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:55.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:55.963000 audit[2507]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:55.963000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc5faf1030 a2=0 a3=7ffc5faf101c items=0 ppid=2253 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:55.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:55.967000 audit[2507]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:55.967000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5faf1030 a2=0 a3=0 
items=0 ppid=2253 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:55.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:57.735000 audit[2509]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:57.735000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffc85e87c0 a2=0 a3=7fffc85e87ac items=0 ppid=2253 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:57.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:57.740000 audit[2509]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:57.740000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc85e87c0 a2=0 a3=0 items=0 ppid=2253 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:57.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:57.943900 kubelet[2108]: I0517 00:34:57.943862 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd4fs\" (UniqueName: 
\"kubernetes.io/projected/cc6db870-1d11-418b-b3fc-d17bba0f7339-kube-api-access-sd4fs\") pod \"calico-typha-6b7d75bf5d-48jf7\" (UID: \"cc6db870-1d11-418b-b3fc-d17bba0f7339\") " pod="calico-system/calico-typha-6b7d75bf5d-48jf7" May 17 00:34:57.943900 kubelet[2108]: I0517 00:34:57.943899 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6db870-1d11-418b-b3fc-d17bba0f7339-tigera-ca-bundle\") pod \"calico-typha-6b7d75bf5d-48jf7\" (UID: \"cc6db870-1d11-418b-b3fc-d17bba0f7339\") " pod="calico-system/calico-typha-6b7d75bf5d-48jf7" May 17 00:34:57.944452 kubelet[2108]: I0517 00:34:57.943989 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc6db870-1d11-418b-b3fc-d17bba0f7339-typha-certs\") pod \"calico-typha-6b7d75bf5d-48jf7\" (UID: \"cc6db870-1d11-418b-b3fc-d17bba0f7339\") " pod="calico-system/calico-typha-6b7d75bf5d-48jf7" May 17 00:34:58.065710 kubelet[2108]: E0517 00:34:58.065677 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:58.066480 env[1307]: time="2025-05-17T00:34:58.066425993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b7d75bf5d-48jf7,Uid:cc6db870-1d11-418b-b3fc-d17bba0f7339,Namespace:calico-system,Attempt:0,}" May 17 00:34:58.089672 env[1307]: time="2025-05-17T00:34:58.089557512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:58.089826 env[1307]: time="2025-05-17T00:34:58.089670204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:58.089826 env[1307]: time="2025-05-17T00:34:58.089741549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:58.090186 env[1307]: time="2025-05-17T00:34:58.090131025Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/480fc4f8a08622c15a44d9ec425fb39ba0a1fc6935a9dbd52de5d43e5ab68983 pid=2518 runtime=io.containerd.runc.v2 May 17 00:34:58.151882 env[1307]: time="2025-05-17T00:34:58.151820823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b7d75bf5d-48jf7,Uid:cc6db870-1d11-418b-b3fc-d17bba0f7339,Namespace:calico-system,Attempt:0,} returns sandbox id \"480fc4f8a08622c15a44d9ec425fb39ba0a1fc6935a9dbd52de5d43e5ab68983\"" May 17 00:34:58.161595 kubelet[2108]: E0517 00:34:58.154280 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:34:58.161790 env[1307]: time="2025-05-17T00:34:58.156445199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:34:58.346753 kubelet[2108]: I0517 00:34:58.346592 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-xtables-lock\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.346753 kubelet[2108]: I0517 00:34:58.346636 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-flexvol-driver-host\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " 
pod="calico-system/calico-node-z7g47" May 17 00:34:58.346753 kubelet[2108]: I0517 00:34:58.346658 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwctv\" (UniqueName: \"kubernetes.io/projected/a1c2564c-13db-41d1-ac14-6d718289107e-kube-api-access-fwctv\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.346753 kubelet[2108]: I0517 00:34:58.346670 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-cni-net-dir\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.346753 kubelet[2108]: I0517 00:34:58.346687 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-cni-log-dir\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.347052 kubelet[2108]: I0517 00:34:58.346699 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-lib-modules\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.347052 kubelet[2108]: I0517 00:34:58.346716 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a1c2564c-13db-41d1-ac14-6d718289107e-node-certs\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.347052 kubelet[2108]: 
I0517 00:34:58.346740 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-policysync\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.347052 kubelet[2108]: I0517 00:34:58.346763 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1c2564c-13db-41d1-ac14-6d718289107e-tigera-ca-bundle\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.347052 kubelet[2108]: I0517 00:34:58.346781 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-var-lib-calico\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.347322 kubelet[2108]: I0517 00:34:58.346841 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-cni-bin-dir\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.347322 kubelet[2108]: I0517 00:34:58.346872 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a1c2564c-13db-41d1-ac14-6d718289107e-var-run-calico\") pod \"calico-node-z7g47\" (UID: \"a1c2564c-13db-41d1-ac14-6d718289107e\") " pod="calico-system/calico-node-z7g47" May 17 00:34:58.451137 kubelet[2108]: E0517 00:34:58.451101 2108 driver-call.go:262] Failed to unmarshal output 
for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.451137 kubelet[2108]: W0517 00:34:58.451125 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.451327 kubelet[2108]: E0517 00:34:58.451154 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.473385 kubelet[2108]: E0517 00:34:58.473330 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.473385 kubelet[2108]: W0517 00:34:58.473360 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.473385 kubelet[2108]: E0517 00:34:58.473387 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.594323 kubelet[2108]: E0517 00:34:58.594257 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gb94f" podUID="7a10bef1-407b-40ca-9b52-a14544f402bf" May 17 00:34:58.649983 kubelet[2108]: E0517 00:34:58.649862 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.649983 kubelet[2108]: W0517 00:34:58.649895 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.649983 kubelet[2108]: E0517 00:34:58.649926 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.650254 kubelet[2108]: E0517 00:34:58.650228 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.650254 kubelet[2108]: W0517 00:34:58.650242 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.650254 kubelet[2108]: E0517 00:34:58.650253 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.650529 kubelet[2108]: E0517 00:34:58.650501 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.650529 kubelet[2108]: W0517 00:34:58.650520 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.650625 kubelet[2108]: E0517 00:34:58.650532 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.650769 kubelet[2108]: E0517 00:34:58.650753 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.650769 kubelet[2108]: W0517 00:34:58.650766 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.650861 kubelet[2108]: E0517 00:34:58.650779 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.650986 kubelet[2108]: E0517 00:34:58.650970 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.650986 kubelet[2108]: W0517 00:34:58.650982 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.651094 kubelet[2108]: E0517 00:34:58.650993 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.651193 kubelet[2108]: E0517 00:34:58.651178 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.651231 kubelet[2108]: W0517 00:34:58.651192 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.651231 kubelet[2108]: E0517 00:34:58.651202 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.651382 kubelet[2108]: E0517 00:34:58.651365 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.651382 kubelet[2108]: W0517 00:34:58.651378 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.651454 kubelet[2108]: E0517 00:34:58.651388 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.651577 kubelet[2108]: E0517 00:34:58.651549 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.651577 kubelet[2108]: W0517 00:34:58.651573 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.651650 kubelet[2108]: E0517 00:34:58.651583 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.651791 kubelet[2108]: E0517 00:34:58.651767 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.651791 kubelet[2108]: W0517 00:34:58.651781 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.651844 kubelet[2108]: E0517 00:34:58.651791 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.651973 kubelet[2108]: E0517 00:34:58.651956 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.651973 kubelet[2108]: W0517 00:34:58.651969 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.652044 kubelet[2108]: E0517 00:34:58.651979 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.652188 kubelet[2108]: E0517 00:34:58.652170 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.652188 kubelet[2108]: W0517 00:34:58.652186 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.652255 kubelet[2108]: E0517 00:34:58.652197 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.652408 kubelet[2108]: E0517 00:34:58.652389 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.652408 kubelet[2108]: W0517 00:34:58.652408 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.652485 kubelet[2108]: E0517 00:34:58.652422 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.652686 kubelet[2108]: E0517 00:34:58.652658 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.652686 kubelet[2108]: W0517 00:34:58.652681 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.652756 kubelet[2108]: E0517 00:34:58.652693 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.652907 kubelet[2108]: E0517 00:34:58.652889 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.652907 kubelet[2108]: W0517 00:34:58.652905 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.652982 kubelet[2108]: E0517 00:34:58.652918 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.653179 kubelet[2108]: E0517 00:34:58.653159 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.653179 kubelet[2108]: W0517 00:34:58.653175 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.653252 kubelet[2108]: E0517 00:34:58.653189 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.653389 kubelet[2108]: E0517 00:34:58.653371 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.653389 kubelet[2108]: W0517 00:34:58.653386 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.653465 kubelet[2108]: E0517 00:34:58.653398 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.653605 kubelet[2108]: E0517 00:34:58.653588 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.653605 kubelet[2108]: W0517 00:34:58.653602 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.653674 kubelet[2108]: E0517 00:34:58.653615 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.653838 kubelet[2108]: E0517 00:34:58.653822 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.653869 kubelet[2108]: W0517 00:34:58.653837 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.653869 kubelet[2108]: E0517 00:34:58.653851 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.654096 kubelet[2108]: E0517 00:34:58.654051 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.654096 kubelet[2108]: W0517 00:34:58.654091 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.654204 kubelet[2108]: E0517 00:34:58.654106 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.654318 kubelet[2108]: E0517 00:34:58.654296 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.654318 kubelet[2108]: W0517 00:34:58.654310 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.654392 kubelet[2108]: E0517 00:34:58.654322 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.750571 kubelet[2108]: E0517 00:34:58.750530 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.750571 kubelet[2108]: W0517 00:34:58.750555 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.750571 kubelet[2108]: E0517 00:34:58.750579 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.750790 kubelet[2108]: I0517 00:34:58.750606 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7a10bef1-407b-40ca-9b52-a14544f402bf-socket-dir\") pod \"csi-node-driver-gb94f\" (UID: \"7a10bef1-407b-40ca-9b52-a14544f402bf\") " pod="calico-system/csi-node-driver-gb94f" May 17 00:34:58.750903 kubelet[2108]: E0517 00:34:58.750869 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.750964 kubelet[2108]: W0517 00:34:58.750904 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.750964 kubelet[2108]: E0517 00:34:58.750946 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.751013 kubelet[2108]: I0517 00:34:58.750993 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7a10bef1-407b-40ca-9b52-a14544f402bf-registration-dir\") pod \"csi-node-driver-gb94f\" (UID: \"7a10bef1-407b-40ca-9b52-a14544f402bf\") " pod="calico-system/csi-node-driver-gb94f" May 17 00:34:58.751300 kubelet[2108]: E0517 00:34:58.751276 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.751300 kubelet[2108]: W0517 00:34:58.751295 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.751404 kubelet[2108]: E0517 00:34:58.751312 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.751404 kubelet[2108]: I0517 00:34:58.751331 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7a10bef1-407b-40ca-9b52-a14544f402bf-varrun\") pod \"csi-node-driver-gb94f\" (UID: \"7a10bef1-407b-40ca-9b52-a14544f402bf\") " pod="calico-system/csi-node-driver-gb94f" May 17 00:34:58.751564 kubelet[2108]: E0517 00:34:58.751538 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.751564 kubelet[2108]: W0517 00:34:58.751549 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.751564 kubelet[2108]: E0517 00:34:58.751561 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.751776 kubelet[2108]: I0517 00:34:58.751580 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a10bef1-407b-40ca-9b52-a14544f402bf-kubelet-dir\") pod \"csi-node-driver-gb94f\" (UID: \"7a10bef1-407b-40ca-9b52-a14544f402bf\") " pod="calico-system/csi-node-driver-gb94f" May 17 00:34:58.751776 kubelet[2108]: E0517 00:34:58.751752 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.751776 kubelet[2108]: W0517 00:34:58.751760 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.751776 kubelet[2108]: E0517 00:34:58.751773 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.751877 kubelet[2108]: I0517 00:34:58.751786 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjb7b\" (UniqueName: \"kubernetes.io/projected/7a10bef1-407b-40ca-9b52-a14544f402bf-kube-api-access-cjb7b\") pod \"csi-node-driver-gb94f\" (UID: \"7a10bef1-407b-40ca-9b52-a14544f402bf\") " pod="calico-system/csi-node-driver-gb94f" May 17 00:34:58.752024 kubelet[2108]: E0517 00:34:58.752006 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.752024 kubelet[2108]: W0517 00:34:58.752022 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.752142 kubelet[2108]: E0517 00:34:58.752040 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.752293 kubelet[2108]: E0517 00:34:58.752278 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.752293 kubelet[2108]: W0517 00:34:58.752292 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.752377 kubelet[2108]: E0517 00:34:58.752334 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.752504 kubelet[2108]: E0517 00:34:58.752474 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.752504 kubelet[2108]: W0517 00:34:58.752498 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.752628 kubelet[2108]: E0517 00:34:58.752532 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.752628 kubelet[2108]: E0517 00:34:58.752625 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.752628 kubelet[2108]: W0517 00:34:58.752632 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.752707 kubelet[2108]: E0517 00:34:58.752679 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.752783 kubelet[2108]: E0517 00:34:58.752768 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.752783 kubelet[2108]: W0517 00:34:58.752777 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.752870 kubelet[2108]: E0517 00:34:58.752789 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.752960 kubelet[2108]: E0517 00:34:58.752940 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.752960 kubelet[2108]: W0517 00:34:58.752958 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.753054 kubelet[2108]: E0517 00:34:58.752976 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.753208 kubelet[2108]: E0517 00:34:58.753181 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.753208 kubelet[2108]: W0517 00:34:58.753197 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.753208 kubelet[2108]: E0517 00:34:58.753209 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.753393 kubelet[2108]: E0517 00:34:58.753376 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.753393 kubelet[2108]: W0517 00:34:58.753390 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.753495 kubelet[2108]: E0517 00:34:58.753399 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.753587 kubelet[2108]: E0517 00:34:58.753573 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.753587 kubelet[2108]: W0517 00:34:58.753584 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.753659 kubelet[2108]: E0517 00:34:58.753591 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:34:58.753775 kubelet[2108]: E0517 00:34:58.753758 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.753848 kubelet[2108]: W0517 00:34:58.753769 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.753848 kubelet[2108]: E0517 00:34:58.753789 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:58.755000 audit[2604]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:58.755000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffec6f1d4b0 a2=0 a3=7ffec6f1d49c items=0 ppid=2253 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:58.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:58.761000 audit[2604]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:34:58.761000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec6f1d4b0 a2=0 a3=0 items=0 ppid=2253 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:34:58.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:34:58.765607 env[1307]: time="2025-05-17T00:34:58.765569962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7g47,Uid:a1c2564c-13db-41d1-ac14-6d718289107e,Namespace:calico-system,Attempt:0,}" May 17 00:34:58.781806 env[1307]: time="2025-05-17T00:34:58.781721584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:34:58.781806 env[1307]: time="2025-05-17T00:34:58.781773943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:34:58.781806 env[1307]: time="2025-05-17T00:34:58.781785394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:34:58.784655 env[1307]: time="2025-05-17T00:34:58.783043230Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6949292d380d867c1fa5bca1d327d2427491d52869b02f31729f999985c22939 pid=2612 runtime=io.containerd.runc.v2 May 17 00:34:58.818020 env[1307]: time="2025-05-17T00:34:58.817952988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7g47,Uid:a1c2564c-13db-41d1-ac14-6d718289107e,Namespace:calico-system,Attempt:0,} returns sandbox id \"6949292d380d867c1fa5bca1d327d2427491d52869b02f31729f999985c22939\"" May 17 00:34:58.852532 kubelet[2108]: E0517 00:34:58.852498 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:34:58.852532 kubelet[2108]: W0517 00:34:58.852515 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:34:58.852532 kubelet[2108]: E0517 00:34:58.852537 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:34:59.052852 systemd[1]: run-containerd-runc-k8s.io-480fc4f8a08622c15a44d9ec425fb39ba0a1fc6935a9dbd52de5d43e5ab68983-runc.0AKIKG.mount: Deactivated successfully. May 17 00:34:59.498143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217227969.mount: Deactivated successfully. 
May 17 00:34:59.854929 kubelet[2108]: E0517 00:34:59.854851 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gb94f" podUID="7a10bef1-407b-40ca-9b52-a14544f402bf" May 17 00:35:00.570290 env[1307]: time="2025-05-17T00:35:00.570239376Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.572465 env[1307]: time="2025-05-17T00:35:00.572416967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.573896 env[1307]: time="2025-05-17T00:35:00.573850183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.575370 env[1307]: time="2025-05-17T00:35:00.575335145Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:00.575741 env[1307]: time="2025-05-17T00:35:00.575706786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:35:00.576835 env[1307]: time="2025-05-17T00:35:00.576787987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:35:00.590997 env[1307]: time="2025-05-17T00:35:00.590933097Z" level=info msg="CreateContainer within sandbox 
\"480fc4f8a08622c15a44d9ec425fb39ba0a1fc6935a9dbd52de5d43e5ab68983\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:35:00.603545 env[1307]: time="2025-05-17T00:35:00.603484248Z" level=info msg="CreateContainer within sandbox \"480fc4f8a08622c15a44d9ec425fb39ba0a1fc6935a9dbd52de5d43e5ab68983\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eb1ffb0cb9340a4c924897cfe60803c6a0d1d4d390dc2f3255211e4a7dfd5c80\"" May 17 00:35:00.604056 env[1307]: time="2025-05-17T00:35:00.604025740Z" level=info msg="StartContainer for \"eb1ffb0cb9340a4c924897cfe60803c6a0d1d4d390dc2f3255211e4a7dfd5c80\"" May 17 00:35:00.785220 env[1307]: time="2025-05-17T00:35:00.785145935Z" level=info msg="StartContainer for \"eb1ffb0cb9340a4c924897cfe60803c6a0d1d4d390dc2f3255211e4a7dfd5c80\" returns successfully" May 17 00:35:00.952055 kubelet[2108]: E0517 00:35:00.951954 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:00.972017 kubelet[2108]: E0517 00:35:00.971976 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.972017 kubelet[2108]: W0517 00:35:00.972001 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.972017 kubelet[2108]: E0517 00:35:00.972025 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:00.976626 kubelet[2108]: E0517 00:35:00.976610 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.976687 kubelet[2108]: W0517 00:35:00.976632 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.976687 kubelet[2108]: E0517 00:35:00.976646 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:00.976862 kubelet[2108]: E0517 00:35:00.976847 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.976862 kubelet[2108]: W0517 00:35:00.976857 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.976954 kubelet[2108]: E0517 00:35:00.976868 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:00.977115 kubelet[2108]: E0517 00:35:00.977047 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.977115 kubelet[2108]: W0517 00:35:00.977065 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.977115 kubelet[2108]: E0517 00:35:00.977099 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:00.977272 kubelet[2108]: E0517 00:35:00.977245 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.977272 kubelet[2108]: W0517 00:35:00.977259 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.977347 kubelet[2108]: E0517 00:35:00.977275 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:00.977497 kubelet[2108]: E0517 00:35:00.977480 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.977497 kubelet[2108]: W0517 00:35:00.977491 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.977497 kubelet[2108]: E0517 00:35:00.977499 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:00.977790 kubelet[2108]: E0517 00:35:00.977774 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.977790 kubelet[2108]: W0517 00:35:00.977788 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.977864 kubelet[2108]: E0517 00:35:00.977847 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:00.977955 kubelet[2108]: E0517 00:35:00.977940 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.977955 kubelet[2108]: W0517 00:35:00.977950 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.978023 kubelet[2108]: E0517 00:35:00.977989 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:00.978205 kubelet[2108]: E0517 00:35:00.978187 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.978205 kubelet[2108]: W0517 00:35:00.978199 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.978279 kubelet[2108]: E0517 00:35:00.978217 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:00.978396 kubelet[2108]: E0517 00:35:00.978382 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.978424 kubelet[2108]: W0517 00:35:00.978395 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.978424 kubelet[2108]: E0517 00:35:00.978414 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:00.978630 kubelet[2108]: E0517 00:35:00.978614 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.978630 kubelet[2108]: W0517 00:35:00.978624 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.978711 kubelet[2108]: E0517 00:35:00.978641 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:00.978866 kubelet[2108]: E0517 00:35:00.978849 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.978866 kubelet[2108]: W0517 00:35:00.978860 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.978941 kubelet[2108]: E0517 00:35:00.978876 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:00.979147 kubelet[2108]: E0517 00:35:00.979129 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.979147 kubelet[2108]: W0517 00:35:00.979141 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.979147 kubelet[2108]: E0517 00:35:00.979149 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:00.979294 kubelet[2108]: E0517 00:35:00.979279 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.979294 kubelet[2108]: W0517 00:35:00.979289 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.979347 kubelet[2108]: E0517 00:35:00.979298 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:00.979456 kubelet[2108]: E0517 00:35:00.979444 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.979456 kubelet[2108]: W0517 00:35:00.979455 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.979503 kubelet[2108]: E0517 00:35:00.979464 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:00.979932 kubelet[2108]: E0517 00:35:00.979906 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.979932 kubelet[2108]: W0517 00:35:00.979921 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.979932 kubelet[2108]: E0517 00:35:00.979931 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:00.980143 kubelet[2108]: E0517 00:35:00.980115 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:00.980143 kubelet[2108]: W0517 00:35:00.980131 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:00.980143 kubelet[2108]: E0517 00:35:00.980139 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:01.855039 kubelet[2108]: E0517 00:35:01.854976 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gb94f" podUID="7a10bef1-407b-40ca-9b52-a14544f402bf" May 17 00:35:01.953197 kubelet[2108]: I0517 00:35:01.953155 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:01.953663 kubelet[2108]: E0517 00:35:01.953542 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:01.974852 env[1307]: time="2025-05-17T00:35:01.974782198Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:01.977169 env[1307]: time="2025-05-17T00:35:01.977116352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:01.978860 env[1307]: time="2025-05-17T00:35:01.978834774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:01.980575 env[1307]: time="2025-05-17T00:35:01.980528701Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:01.980981 env[1307]: time="2025-05-17T00:35:01.980943854Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:35:01.983056 env[1307]: time="2025-05-17T00:35:01.983004132Z" level=info msg="CreateContainer within sandbox \"6949292d380d867c1fa5bca1d327d2427491d52869b02f31729f999985c22939\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:35:01.984165 kubelet[2108]: E0517 00:35:01.984145 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:01.984242 kubelet[2108]: W0517 00:35:01.984167 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:01.984242 kubelet[2108]: E0517 00:35:01.984192 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:01.984391 kubelet[2108]: E0517 00:35:01.984371 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:01.984391 kubelet[2108]: W0517 00:35:01.984382 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:01.984462 kubelet[2108]: E0517 00:35:01.984394 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:35:01.987337 kubelet[2108]: E0517 00:35:01.987327 2108 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:35:01.987363 kubelet[2108]: W0517 00:35:01.987338 2108 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:35:01.987363 kubelet[2108]: E0517 00:35:01.987347 2108 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:35:01.997868 env[1307]: time="2025-05-17T00:35:01.997826900Z" level=info msg="CreateContainer within sandbox \"6949292d380d867c1fa5bca1d327d2427491d52869b02f31729f999985c22939\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"18471b8841c470cee6cf13e176d663b4b7b80da448ca70625d5766c730db94a3\"" May 17 00:35:01.998311 env[1307]: time="2025-05-17T00:35:01.998271159Z" level=info msg="StartContainer for \"18471b8841c470cee6cf13e176d663b4b7b80da448ca70625d5766c730db94a3\"" May 17 00:35:02.054180 env[1307]: time="2025-05-17T00:35:02.054099565Z" level=info msg="StartContainer for \"18471b8841c470cee6cf13e176d663b4b7b80da448ca70625d5766c730db94a3\" returns successfully" May 17 00:35:02.119556 env[1307]: time="2025-05-17T00:35:02.119422878Z" level=info msg="shim disconnected" id=18471b8841c470cee6cf13e176d663b4b7b80da448ca70625d5766c730db94a3 May 17 00:35:02.119556 env[1307]: time="2025-05-17T00:35:02.119477280Z" level=warning msg="cleaning up after shim disconnected" id=18471b8841c470cee6cf13e176d663b4b7b80da448ca70625d5766c730db94a3 namespace=k8s.io May 17 00:35:02.119556 env[1307]: time="2025-05-17T00:35:02.119487330Z" level=info msg="cleaning up dead shim" May 17 00:35:02.125863 env[1307]: time="2025-05-17T00:35:02.125820205Z" 
level=warning msg="cleanup warnings time=\"2025-05-17T00:35:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2818 runtime=io.containerd.runc.v2\n" May 17 00:35:02.582006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18471b8841c470cee6cf13e176d663b4b7b80da448ca70625d5766c730db94a3-rootfs.mount: Deactivated successfully. May 17 00:35:02.956727 env[1307]: time="2025-05-17T00:35:02.956618957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:35:02.970267 kubelet[2108]: I0517 00:35:02.970198 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b7d75bf5d-48jf7" podStartSLOduration=3.549064928 podStartE2EDuration="5.970171721s" podCreationTimestamp="2025-05-17 00:34:57 +0000 UTC" firstStartedPulling="2025-05-17 00:34:58.155444077 +0000 UTC m=+15.373198180" lastFinishedPulling="2025-05-17 00:35:00.57655086 +0000 UTC m=+17.794304973" observedRunningTime="2025-05-17 00:35:00.96074504 +0000 UTC m=+18.178499143" watchObservedRunningTime="2025-05-17 00:35:02.970171721 +0000 UTC m=+20.187925824" May 17 00:35:03.854251 kubelet[2108]: E0517 00:35:03.854184 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gb94f" podUID="7a10bef1-407b-40ca-9b52-a14544f402bf" May 17 00:35:05.856647 kubelet[2108]: E0517 00:35:05.856588 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gb94f" podUID="7a10bef1-407b-40ca-9b52-a14544f402bf" May 17 00:35:07.009829 env[1307]: time="2025-05-17T00:35:07.009775641Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:07.011525 env[1307]: time="2025-05-17T00:35:07.011480432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:07.012832 env[1307]: time="2025-05-17T00:35:07.012806449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:07.014120 env[1307]: time="2025-05-17T00:35:07.014092712Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:07.014502 env[1307]: time="2025-05-17T00:35:07.014481825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:35:07.016550 env[1307]: time="2025-05-17T00:35:07.016520165Z" level=info msg="CreateContainer within sandbox \"6949292d380d867c1fa5bca1d327d2427491d52869b02f31729f999985c22939\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:35:07.031876 env[1307]: time="2025-05-17T00:35:07.031840690Z" level=info msg="CreateContainer within sandbox \"6949292d380d867c1fa5bca1d327d2427491d52869b02f31729f999985c22939\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"95c1af131a2b757c272da1159f429d82079267a38274d088478659641abf939a\"" May 17 00:35:07.032273 env[1307]: time="2025-05-17T00:35:07.032253839Z" level=info msg="StartContainer for \"95c1af131a2b757c272da1159f429d82079267a38274d088478659641abf939a\"" May 17 00:35:07.358559 env[1307]: 
time="2025-05-17T00:35:07.358504667Z" level=info msg="StartContainer for \"95c1af131a2b757c272da1159f429d82079267a38274d088478659641abf939a\" returns successfully"
May 17 00:35:07.854711 kubelet[2108]: E0517 00:35:07.854674 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gb94f" podUID="7a10bef1-407b-40ca-9b52-a14544f402bf"
May 17 00:35:08.088643 env[1307]: time="2025-05-17T00:35:08.088576779Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:35:08.104196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95c1af131a2b757c272da1159f429d82079267a38274d088478659641abf939a-rootfs.mount: Deactivated successfully.
May 17 00:35:08.107063 env[1307]: time="2025-05-17T00:35:08.106979833Z" level=info msg="shim disconnected" id=95c1af131a2b757c272da1159f429d82079267a38274d088478659641abf939a
May 17 00:35:08.107063 env[1307]: time="2025-05-17T00:35:08.107025720Z" level=warning msg="cleaning up after shim disconnected" id=95c1af131a2b757c272da1159f429d82079267a38274d088478659641abf939a namespace=k8s.io
May 17 00:35:08.107063 env[1307]: time="2025-05-17T00:35:08.107037191Z" level=info msg="cleaning up dead shim"
May 17 00:35:08.113190 env[1307]: time="2025-05-17T00:35:08.113156995Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:35:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2883 runtime=io.containerd.runc.v2\n"
May 17 00:35:08.159456 kubelet[2108]: I0517 00:35:08.159433 2108 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 17 00:35:08.235657 kubelet[2108]: I0517 00:35:08.235595 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da68fa8b-b750-48ef-8ed0-edd244e098a4-tigera-ca-bundle\") pod \"calico-kube-controllers-8db7c4fcb-w875d\" (UID: \"da68fa8b-b750-48ef-8ed0-edd244e098a4\") " pod="calico-system/calico-kube-controllers-8db7c4fcb-w875d"
May 17 00:35:08.235657 kubelet[2108]: I0517 00:35:08.235646 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ffd190ac-644b-4bbf-bd2f-feed5f4c93a6-calico-apiserver-certs\") pod \"calico-apiserver-dd64f56db-gn2th\" (UID: \"ffd190ac-644b-4bbf-bd2f-feed5f4c93a6\") " pod="calico-apiserver/calico-apiserver-dd64f56db-gn2th"
May 17 00:35:08.235883 kubelet[2108]: I0517 00:35:08.235700 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltmj5\" (UniqueName: \"kubernetes.io/projected/da68fa8b-b750-48ef-8ed0-edd244e098a4-kube-api-access-ltmj5\") pod \"calico-kube-controllers-8db7c4fcb-w875d\" (UID: \"da68fa8b-b750-48ef-8ed0-edd244e098a4\") " pod="calico-system/calico-kube-controllers-8db7c4fcb-w875d"
May 17 00:35:08.235883 kubelet[2108]: I0517 00:35:08.235721 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45s6d\" (UniqueName: \"kubernetes.io/projected/ffd190ac-644b-4bbf-bd2f-feed5f4c93a6-kube-api-access-45s6d\") pod \"calico-apiserver-dd64f56db-gn2th\" (UID: \"ffd190ac-644b-4bbf-bd2f-feed5f4c93a6\") " pod="calico-apiserver/calico-apiserver-dd64f56db-gn2th"
May 17 00:35:08.336214 kubelet[2108]: I0517 00:35:08.336173 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915e2165-3634-409e-af91-ef9388cac59f-config\") pod \"goldmane-8f77d7b6c-zf9xd\" (UID: \"915e2165-3634-409e-af91-ef9388cac59f\") " pod="calico-system/goldmane-8f77d7b6c-zf9xd"
May 17 00:35:08.336214 kubelet[2108]: I0517 00:35:08.336210 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckmv6\" (UniqueName: \"kubernetes.io/projected/915e2165-3634-409e-af91-ef9388cac59f-kube-api-access-ckmv6\") pod \"goldmane-8f77d7b6c-zf9xd\" (UID: \"915e2165-3634-409e-af91-ef9388cac59f\") " pod="calico-system/goldmane-8f77d7b6c-zf9xd"
May 17 00:35:08.336214 kubelet[2108]: I0517 00:35:08.336228 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aecaa202-f800-4402-b8be-d457733677a8-config-volume\") pod \"coredns-7c65d6cfc9-p882x\" (UID: \"aecaa202-f800-4402-b8be-d457733677a8\") " pod="kube-system/coredns-7c65d6cfc9-p882x"
May 17 00:35:08.336461 kubelet[2108]: I0517 00:35:08.336245 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfb5x\" (UniqueName: \"kubernetes.io/projected/aecaa202-f800-4402-b8be-d457733677a8-kube-api-access-qfb5x\") pod \"coredns-7c65d6cfc9-p882x\" (UID: \"aecaa202-f800-4402-b8be-d457733677a8\") " pod="kube-system/coredns-7c65d6cfc9-p882x"
May 17 00:35:08.336461 kubelet[2108]: I0517 00:35:08.336262 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22h8\" (UniqueName: \"kubernetes.io/projected/380793ab-fb3a-44a5-9234-41b018bda4aa-kube-api-access-r22h8\") pod \"whisker-5cfb7c6489-cwwkf\" (UID: \"380793ab-fb3a-44a5-9234-41b018bda4aa\") " pod="calico-system/whisker-5cfb7c6489-cwwkf"
May 17 00:35:08.336461 kubelet[2108]: I0517 00:35:08.336278 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/915e2165-3634-409e-af91-ef9388cac59f-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-zf9xd\" (UID: \"915e2165-3634-409e-af91-ef9388cac59f\") " pod="calico-system/goldmane-8f77d7b6c-zf9xd"
May 17 00:35:08.336461 kubelet[2108]: I0517 00:35:08.336293 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/915e2165-3634-409e-af91-ef9388cac59f-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-zf9xd\" (UID: \"915e2165-3634-409e-af91-ef9388cac59f\") " pod="calico-system/goldmane-8f77d7b6c-zf9xd"
May 17 00:35:08.336461 kubelet[2108]: I0517 00:35:08.336318 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380793ab-fb3a-44a5-9234-41b018bda4aa-whisker-ca-bundle\") pod \"whisker-5cfb7c6489-cwwkf\" (UID: \"380793ab-fb3a-44a5-9234-41b018bda4aa\") " pod="calico-system/whisker-5cfb7c6489-cwwkf"
May 17 00:35:08.336638 kubelet[2108]: I0517 00:35:08.336395 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhlq\" (UniqueName: \"kubernetes.io/projected/156114d2-bfb2-42a0-a77e-b4eed0e196ef-kube-api-access-qrhlq\") pod \"coredns-7c65d6cfc9-h6snv\" (UID: \"156114d2-bfb2-42a0-a77e-b4eed0e196ef\") " pod="kube-system/coredns-7c65d6cfc9-h6snv"
May 17 00:35:08.336638 kubelet[2108]: I0517 00:35:08.336454 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b35c5827-c746-454d-b6bf-a8a0e8b71713-calico-apiserver-certs\") pod \"calico-apiserver-dd64f56db-z62dt\" (UID: \"b35c5827-c746-454d-b6bf-a8a0e8b71713\") " pod="calico-apiserver/calico-apiserver-dd64f56db-z62dt"
May 17 00:35:08.336638 kubelet[2108]: I0517 00:35:08.336473 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/156114d2-bfb2-42a0-a77e-b4eed0e196ef-config-volume\") pod \"coredns-7c65d6cfc9-h6snv\" (UID: \"156114d2-bfb2-42a0-a77e-b4eed0e196ef\") " pod="kube-system/coredns-7c65d6cfc9-h6snv"
May 17 00:35:08.336638 kubelet[2108]: I0517 00:35:08.336490 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/380793ab-fb3a-44a5-9234-41b018bda4aa-whisker-backend-key-pair\") pod \"whisker-5cfb7c6489-cwwkf\" (UID: \"380793ab-fb3a-44a5-9234-41b018bda4aa\") " pod="calico-system/whisker-5cfb7c6489-cwwkf"
May 17 00:35:08.336638 kubelet[2108]: I0517 00:35:08.336527 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5bnj\" (UniqueName: \"kubernetes.io/projected/b35c5827-c746-454d-b6bf-a8a0e8b71713-kube-api-access-t5bnj\") pod \"calico-apiserver-dd64f56db-z62dt\" (UID: \"b35c5827-c746-454d-b6bf-a8a0e8b71713\") " pod="calico-apiserver/calico-apiserver-dd64f56db-z62dt"
May 17 00:35:08.480577 kubelet[2108]: E0517 00:35:08.480452 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:35:08.481272 env[1307]: time="2025-05-17T00:35:08.481208913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h6snv,Uid:156114d2-bfb2-42a0-a77e-b4eed0e196ef,Namespace:kube-system,Attempt:0,}"
May 17 00:35:08.483408 env[1307]: time="2025-05-17T00:35:08.483364261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8db7c4fcb-w875d,Uid:da68fa8b-b750-48ef-8ed0-edd244e098a4,Namespace:calico-system,Attempt:0,}"
May 17 00:35:08.487388 kubelet[2108]: E0517 00:35:08.487315 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:35:08.488046 env[1307]: time="2025-05-17T00:35:08.488011482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd64f56db-gn2th,Uid:ffd190ac-644b-4bbf-bd2f-feed5f4c93a6,Namespace:calico-apiserver,Attempt:0,}"
May 17 00:35:08.488851 env[1307]: time="2025-05-17T00:35:08.488809725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p882x,Uid:aecaa202-f800-4402-b8be-d457733677a8,Namespace:kube-system,Attempt:0,}"
May 17 00:35:08.494796 env[1307]: time="2025-05-17T00:35:08.494761453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-zf9xd,Uid:915e2165-3634-409e-af91-ef9388cac59f,Namespace:calico-system,Attempt:0,}"
May 17 00:35:08.496334 env[1307]: time="2025-05-17T00:35:08.496303507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cfb7c6489-cwwkf,Uid:380793ab-fb3a-44a5-9234-41b018bda4aa,Namespace:calico-system,Attempt:0,}"
May 17 00:35:08.496566 env[1307]: time="2025-05-17T00:35:08.496533420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd64f56db-z62dt,Uid:b35c5827-c746-454d-b6bf-a8a0e8b71713,Namespace:calico-apiserver,Attempt:0,}"
May 17 00:35:08.615548 env[1307]: time="2025-05-17T00:35:08.615474674Z" level=error msg="Failed to destroy network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.616028 env[1307]: time="2025-05-17T00:35:08.616001487Z" level=error msg="encountered an error cleaning up failed sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.616164 env[1307]: time="2025-05-17T00:35:08.616132844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h6snv,Uid:156114d2-bfb2-42a0-a77e-b4eed0e196ef,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.616837 kubelet[2108]: E0517 00:35:08.616483 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.616837 kubelet[2108]: E0517 00:35:08.616554 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-h6snv"
May 17 00:35:08.616837 kubelet[2108]: E0517 00:35:08.616575 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-h6snv"
May 17 00:35:08.616968 kubelet[2108]: E0517 00:35:08.616616 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-h6snv_kube-system(156114d2-bfb2-42a0-a77e-b4eed0e196ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-h6snv_kube-system(156114d2-bfb2-42a0-a77e-b4eed0e196ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-h6snv" podUID="156114d2-bfb2-42a0-a77e-b4eed0e196ef"
May 17 00:35:08.617133 env[1307]: time="2025-05-17T00:35:08.617102710Z" level=error msg="Failed to destroy network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.617440 env[1307]: time="2025-05-17T00:35:08.617412634Z" level=error msg="encountered an error cleaning up failed sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.617549 env[1307]: time="2025-05-17T00:35:08.617520918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p882x,Uid:aecaa202-f800-4402-b8be-d457733677a8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.617928 kubelet[2108]: E0517 00:35:08.617829 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.617928 kubelet[2108]: E0517 00:35:08.617857 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p882x"
May 17 00:35:08.617928 kubelet[2108]: E0517 00:35:08.617871 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p882x"
May 17 00:35:08.618036 kubelet[2108]: E0517 00:35:08.617893 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-p882x_kube-system(aecaa202-f800-4402-b8be-d457733677a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-p882x_kube-system(aecaa202-f800-4402-b8be-d457733677a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p882x" podUID="aecaa202-f800-4402-b8be-d457733677a8"
May 17 00:35:08.634233 env[1307]: time="2025-05-17T00:35:08.634170310Z" level=error msg="Failed to destroy network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.634544 env[1307]: time="2025-05-17T00:35:08.634506463Z" level=error msg="encountered an error cleaning up failed sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.634594 env[1307]: time="2025-05-17T00:35:08.634561927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd64f56db-gn2th,Uid:ffd190ac-644b-4bbf-bd2f-feed5f4c93a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.634857 kubelet[2108]: E0517 00:35:08.634807 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.634919 kubelet[2108]: E0517 00:35:08.634883 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd64f56db-gn2th"
May 17 00:35:08.634919 kubelet[2108]: E0517 00:35:08.634905 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd64f56db-gn2th"
May 17 00:35:08.635007 kubelet[2108]: E0517 00:35:08.634970 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd64f56db-gn2th_calico-apiserver(ffd190ac-644b-4bbf-bd2f-feed5f4c93a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd64f56db-gn2th_calico-apiserver(ffd190ac-644b-4bbf-bd2f-feed5f4c93a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd64f56db-gn2th" podUID="ffd190ac-644b-4bbf-bd2f-feed5f4c93a6"
May 17 00:35:08.638430 env[1307]: time="2025-05-17T00:35:08.638379306Z" level=error msg="Failed to destroy network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.639183 env[1307]: time="2025-05-17T00:35:08.639157872Z" level=error msg="encountered an error cleaning up failed sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.639307 env[1307]: time="2025-05-17T00:35:08.639277898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8db7c4fcb-w875d,Uid:da68fa8b-b750-48ef-8ed0-edd244e098a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.639719 kubelet[2108]: E0517 00:35:08.639569 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.639719 kubelet[2108]: E0517 00:35:08.639619 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8db7c4fcb-w875d"
May 17 00:35:08.639719 kubelet[2108]: E0517 00:35:08.639636 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8db7c4fcb-w875d"
May 17 00:35:08.639868 kubelet[2108]: E0517 00:35:08.639676 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8db7c4fcb-w875d_calico-system(da68fa8b-b750-48ef-8ed0-edd244e098a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8db7c4fcb-w875d_calico-system(da68fa8b-b750-48ef-8ed0-edd244e098a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8db7c4fcb-w875d" podUID="da68fa8b-b750-48ef-8ed0-edd244e098a4"
May 17 00:35:08.647224 env[1307]: time="2025-05-17T00:35:08.647163687Z" level=error msg="Failed to destroy network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.647518 env[1307]: time="2025-05-17T00:35:08.647487387Z" level=error msg="encountered an error cleaning up failed sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.647568 env[1307]: time="2025-05-17T00:35:08.647534766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cfb7c6489-cwwkf,Uid:380793ab-fb3a-44a5-9234-41b018bda4aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.647941 kubelet[2108]: E0517 00:35:08.647695 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.647941 kubelet[2108]: E0517 00:35:08.647745 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cfb7c6489-cwwkf"
May 17 00:35:08.647941 kubelet[2108]: E0517 00:35:08.647762 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cfb7c6489-cwwkf"
May 17 00:35:08.648081 kubelet[2108]: E0517 00:35:08.647797 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cfb7c6489-cwwkf_calico-system(380793ab-fb3a-44a5-9234-41b018bda4aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cfb7c6489-cwwkf_calico-system(380793ab-fb3a-44a5-9234-41b018bda4aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cfb7c6489-cwwkf" podUID="380793ab-fb3a-44a5-9234-41b018bda4aa"
May 17 00:35:08.652782 env[1307]: time="2025-05-17T00:35:08.652701626Z" level=error msg="Failed to destroy network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.653153 env[1307]: time="2025-05-17T00:35:08.653118381Z" level=error msg="encountered an error cleaning up failed sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.653198 env[1307]: time="2025-05-17T00:35:08.653171210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-zf9xd,Uid:915e2165-3634-409e-af91-ef9388cac59f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.653494 kubelet[2108]: E0517 00:35:08.653445 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.653553 kubelet[2108]: E0517 00:35:08.653516 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-zf9xd"
May 17 00:35:08.653553 kubelet[2108]: E0517 00:35:08.653534 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-zf9xd"
May 17 00:35:08.653722 kubelet[2108]: E0517 00:35:08.653625 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-zf9xd_calico-system(915e2165-3634-409e-af91-ef9388cac59f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-zf9xd_calico-system(915e2165-3634-409e-af91-ef9388cac59f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-zf9xd" podUID="915e2165-3634-409e-af91-ef9388cac59f"
May 17 00:35:08.660104 env[1307]: time="2025-05-17T00:35:08.660037921Z" level=error msg="Failed to destroy network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.660509 env[1307]: time="2025-05-17T00:35:08.660480464Z" level=error msg="encountered an error cleaning up failed sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.660677 env[1307]: time="2025-05-17T00:35:08.660624195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd64f56db-z62dt,Uid:b35c5827-c746-454d-b6bf-a8a0e8b71713,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.660889 kubelet[2108]: E0517 00:35:08.660856 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:35:08.660935 kubelet[2108]: E0517 00:35:08.660914 2108 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd64f56db-z62dt"
May 17 00:35:08.660975 kubelet[2108]: E0517 00:35:08.660936 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd64f56db-z62dt"
May 17 00:35:08.661019 kubelet[2108]: E0517 00:35:08.660985 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd64f56db-z62dt_calico-apiserver(b35c5827-c746-454d-b6bf-a8a0e8b71713)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd64f56db-z62dt_calico-apiserver(b35c5827-c746-454d-b6bf-a8a0e8b71713)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd64f56db-z62dt" podUID="b35c5827-c746-454d-b6bf-a8a0e8b71713"
May 17 00:35:09.012108 kubelet[2108]: I0517 00:35:09.012052 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449"
May 17 00:35:09.013849 env[1307]: time="2025-05-17T00:35:09.012782699Z" level=info msg="StopPodSandbox for \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\""
May 17 00:35:09.015305 env[1307]: time="2025-05-17T00:35:09.015277766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\""
May 17 00:35:09.016123 kubelet[2108]: I0517 00:35:09.016100 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da"
May 17 00:35:09.017028 env[1307]: time="2025-05-17T00:35:09.016997414Z" level=info msg="StopPodSandbox for \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\""
May 17 00:35:09.017108 kubelet[2108]: I0517 00:35:09.016992 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c"
May 17 00:35:09.018110 env[1307]: time="2025-05-17T00:35:09.018059905Z" level=info msg="StopPodSandbox for \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\""
May 17 00:35:09.019010 kubelet[2108]: I0517 00:35:09.018678 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93"
May 17 00:35:09.019688 env[1307]: time="2025-05-17T00:35:09.019187017Z" level=info msg="StopPodSandbox for \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\""
May 17 00:35:09.020380 kubelet[2108]: I0517 00:35:09.020348 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7"
May 17 00:35:09.020845 env[1307]: time="2025-05-17T00:35:09.020817266Z" level=info msg="StopPodSandbox for \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\""
May 17 00:35:09.022228 kubelet[2108]: I0517 00:35:09.021743 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7"
May 17
00:35:09.022542 env[1307]: time="2025-05-17T00:35:09.022520031Z" level=info msg="StopPodSandbox for \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\"" May 17 00:35:09.023036 kubelet[2108]: I0517 00:35:09.022770 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:09.023164 env[1307]: time="2025-05-17T00:35:09.023139878Z" level=info msg="StopPodSandbox for \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\"" May 17 00:35:09.081660 env[1307]: time="2025-05-17T00:35:09.081585902Z" level=error msg="StopPodSandbox for \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\" failed" error="failed to destroy network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.082263 kubelet[2108]: E0517 00:35:09.082208 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:09.082353 kubelet[2108]: E0517 00:35:09.082291 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c"} May 17 00:35:09.082403 kubelet[2108]: E0517 00:35:09.082360 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"156114d2-bfb2-42a0-a77e-b4eed0e196ef\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:35:09.082403 kubelet[2108]: E0517 00:35:09.082384 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"156114d2-bfb2-42a0-a77e-b4eed0e196ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-h6snv" podUID="156114d2-bfb2-42a0-a77e-b4eed0e196ef" May 17 00:35:09.082577 env[1307]: time="2025-05-17T00:35:09.082505604Z" level=error msg="StopPodSandbox for \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\" failed" error="failed to destroy network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.082746 kubelet[2108]: E0517 00:35:09.082714 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:09.082809 kubelet[2108]: E0517 00:35:09.082751 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7"} May 17 00:35:09.082809 kubelet[2108]: E0517 00:35:09.082782 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffd190ac-644b-4bbf-bd2f-feed5f4c93a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:35:09.082918 kubelet[2108]: E0517 00:35:09.082804 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffd190ac-644b-4bbf-bd2f-feed5f4c93a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd64f56db-gn2th" podUID="ffd190ac-644b-4bbf-bd2f-feed5f4c93a6" May 17 00:35:09.083139 env[1307]: time="2025-05-17T00:35:09.083091666Z" level=error msg="StopPodSandbox for \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\" failed" error="failed to destroy network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 17 00:35:09.083271 kubelet[2108]: E0517 00:35:09.083233 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:09.083271 kubelet[2108]: E0517 00:35:09.083270 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449"} May 17 00:35:09.083366 kubelet[2108]: E0517 00:35:09.083296 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"915e2165-3634-409e-af91-ef9388cac59f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:35:09.083366 kubelet[2108]: E0517 00:35:09.083322 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"915e2165-3634-409e-af91-ef9388cac59f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-zf9xd" podUID="915e2165-3634-409e-af91-ef9388cac59f" May 17 
00:35:09.085439 env[1307]: time="2025-05-17T00:35:09.085379885Z" level=error msg="StopPodSandbox for \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\" failed" error="failed to destroy network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.085552 kubelet[2108]: E0517 00:35:09.085519 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:09.085552 kubelet[2108]: E0517 00:35:09.085551 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93"} May 17 00:35:09.085654 kubelet[2108]: E0517 00:35:09.085571 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aecaa202-f800-4402-b8be-d457733677a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:35:09.085654 kubelet[2108]: E0517 00:35:09.085594 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aecaa202-f800-4402-b8be-d457733677a8\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p882x" podUID="aecaa202-f800-4402-b8be-d457733677a8" May 17 00:35:09.089863 env[1307]: time="2025-05-17T00:35:09.089799986Z" level=error msg="StopPodSandbox for \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\" failed" error="failed to destroy network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.090559 kubelet[2108]: E0517 00:35:09.090517 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:09.090648 kubelet[2108]: E0517 00:35:09.090575 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7"} May 17 00:35:09.090648 kubelet[2108]: E0517 00:35:09.090608 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"380793ab-fb3a-44a5-9234-41b018bda4aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:35:09.090648 kubelet[2108]: E0517 00:35:09.090635 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"380793ab-fb3a-44a5-9234-41b018bda4aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cfb7c6489-cwwkf" podUID="380793ab-fb3a-44a5-9234-41b018bda4aa" May 17 00:35:09.096221 env[1307]: time="2025-05-17T00:35:09.096158557Z" level=error msg="StopPodSandbox for \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\" failed" error="failed to destroy network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.096440 kubelet[2108]: E0517 00:35:09.096397 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:09.096506 kubelet[2108]: E0517 00:35:09.096457 2108 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da"} May 17 00:35:09.096506 kubelet[2108]: E0517 00:35:09.096497 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b35c5827-c746-454d-b6bf-a8a0e8b71713\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:35:09.096616 kubelet[2108]: E0517 00:35:09.096524 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b35c5827-c746-454d-b6bf-a8a0e8b71713\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd64f56db-z62dt" podUID="b35c5827-c746-454d-b6bf-a8a0e8b71713" May 17 00:35:09.099092 env[1307]: time="2025-05-17T00:35:09.099014434Z" level=error msg="StopPodSandbox for \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\" failed" error="failed to destroy network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.099291 kubelet[2108]: E0517 00:35:09.099250 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:09.099338 kubelet[2108]: E0517 00:35:09.099291 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d"} May 17 00:35:09.099338 kubelet[2108]: E0517 00:35:09.099316 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da68fa8b-b750-48ef-8ed0-edd244e098a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:35:09.099425 kubelet[2108]: E0517 00:35:09.099334 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da68fa8b-b750-48ef-8ed0-edd244e098a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8db7c4fcb-w875d" podUID="da68fa8b-b750-48ef-8ed0-edd244e098a4" May 17 00:35:09.857617 env[1307]: time="2025-05-17T00:35:09.857568830Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-gb94f,Uid:7a10bef1-407b-40ca-9b52-a14544f402bf,Namespace:calico-system,Attempt:0,}" May 17 00:35:09.908858 env[1307]: time="2025-05-17T00:35:09.908781076Z" level=error msg="Failed to destroy network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.909241 env[1307]: time="2025-05-17T00:35:09.909196358Z" level=error msg="encountered an error cleaning up failed sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.909300 env[1307]: time="2025-05-17T00:35:09.909252223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gb94f,Uid:7a10bef1-407b-40ca-9b52-a14544f402bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.909513 kubelet[2108]: E0517 00:35:09.909472 2108 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:09.909599 kubelet[2108]: E0517 00:35:09.909540 2108 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gb94f" May 17 00:35:09.909599 kubelet[2108]: E0517 00:35:09.909560 2108 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gb94f" May 17 00:35:09.909770 kubelet[2108]: E0517 00:35:09.909613 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gb94f_calico-system(7a10bef1-407b-40ca-9b52-a14544f402bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gb94f_calico-system(7a10bef1-407b-40ca-9b52-a14544f402bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gb94f" podUID="7a10bef1-407b-40ca-9b52-a14544f402bf" May 17 00:35:09.911170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1-shm.mount: Deactivated successfully. 
May 17 00:35:10.025283 kubelet[2108]: I0517 00:35:10.025239 2108 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:10.025819 env[1307]: time="2025-05-17T00:35:10.025784673Z" level=info msg="StopPodSandbox for \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\"" May 17 00:35:10.048938 env[1307]: time="2025-05-17T00:35:10.048855559Z" level=error msg="StopPodSandbox for \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\" failed" error="failed to destroy network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:35:10.049149 kubelet[2108]: E0517 00:35:10.049113 2108 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:10.049210 kubelet[2108]: E0517 00:35:10.049160 2108 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1"} May 17 00:35:10.049210 kubelet[2108]: E0517 00:35:10.049194 2108 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a10bef1-407b-40ca-9b52-a14544f402bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:35:10.049301 kubelet[2108]: E0517 00:35:10.049215 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a10bef1-407b-40ca-9b52-a14544f402bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gb94f" podUID="7a10bef1-407b-40ca-9b52-a14544f402bf" May 17 00:35:12.583305 kubelet[2108]: I0517 00:35:12.582799 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:12.583802 kubelet[2108]: E0517 00:35:12.583775 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:12.612106 kernel: kauditd_printk_skb: 19 callbacks suppressed May 17 00:35:12.612248 kernel: audit: type=1325 audit(1747442112.608:296): table=filter:97 family=2 entries=21 op=nft_register_rule pid=3322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:12.608000 audit[3322]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=3322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:12.608000 audit[3322]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee81a1e70 a2=0 a3=7ffee81a1e5c items=0 ppid=2253 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:12.620707 kernel: audit: type=1300 audit(1747442112.608:296): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee81a1e70 a2=0 a3=7ffee81a1e5c items=0 ppid=2253 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:12.620801 kernel: audit: type=1327 audit(1747442112.608:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:12.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:12.628000 audit[3322]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:12.628000 audit[3322]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffee81a1e70 a2=0 a3=7ffee81a1e5c items=0 ppid=2253 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:12.637734 kernel: audit: type=1325 audit(1747442112.628:297): table=nat:98 family=2 entries=19 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:12.637777 kernel: audit: type=1300 audit(1747442112.628:297): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffee81a1e70 a2=0 a3=7ffee81a1e5c items=0 ppid=2253 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:12.628000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:12.640842 kernel: audit: type=1327 audit(1747442112.628:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:13.034638 kubelet[2108]: E0517 00:35:13.034519 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:17.451664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252038756.mount: Deactivated successfully. May 17 00:35:18.500564 env[1307]: time="2025-05-17T00:35:18.500506878Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:18.504052 env[1307]: time="2025-05-17T00:35:18.504006568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:18.505758 env[1307]: time="2025-05-17T00:35:18.505713446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:18.507483 env[1307]: time="2025-05-17T00:35:18.507448668Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:18.507771 env[1307]: time="2025-05-17T00:35:18.507738473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 
00:35:18.519349 env[1307]: time="2025-05-17T00:35:18.519293595Z" level=info msg="CreateContainer within sandbox \"6949292d380d867c1fa5bca1d327d2427491d52869b02f31729f999985c22939\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:35:18.535915 env[1307]: time="2025-05-17T00:35:18.535861612Z" level=info msg="CreateContainer within sandbox \"6949292d380d867c1fa5bca1d327d2427491d52869b02f31729f999985c22939\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7c82a5cb6c18611a3e8b7310169d26b9967a15b32715a0322a9ed2500fde1e8f\"" May 17 00:35:18.536468 env[1307]: time="2025-05-17T00:35:18.536439088Z" level=info msg="StartContainer for \"7c82a5cb6c18611a3e8b7310169d26b9967a15b32715a0322a9ed2500fde1e8f\"" May 17 00:35:18.579939 env[1307]: time="2025-05-17T00:35:18.579874782Z" level=info msg="StartContainer for \"7c82a5cb6c18611a3e8b7310169d26b9967a15b32715a0322a9ed2500fde1e8f\" returns successfully" May 17 00:35:18.725533 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:35:18.725688 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 00:35:18.799247 env[1307]: time="2025-05-17T00:35:18.799184576Z" level=info msg="StopPodSandbox for \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\"" May 17 00:35:18.892760 kernel: audit: type=1130 audit(1747442118.885:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:43394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:18.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:43394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:18.886680 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:43394.service. 
May 17 00:35:18.927000 audit[3402]: USER_ACCT pid=3402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:18.930368 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 43394 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:18.932000 audit[3402]: CRED_ACQ pid=3402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:18.933934 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.852 [INFO][3387] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.852 [INFO][3387] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" iface="eth0" netns="/var/run/netns/cni-7bf45826-5956-2ec4-eebe-c8b31a0c8c9b" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.852 [INFO][3387] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" iface="eth0" netns="/var/run/netns/cni-7bf45826-5956-2ec4-eebe-c8b31a0c8c9b" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.852 [INFO][3387] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" iface="eth0" netns="/var/run/netns/cni-7bf45826-5956-2ec4-eebe-c8b31a0c8c9b" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.853 [INFO][3387] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.853 [INFO][3387] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.921 [INFO][3396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.921 [INFO][3396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.922 [INFO][3396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.930 [WARNING][3396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.931 [INFO][3396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.932 [INFO][3396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:18.936615 env[1307]: 2025-05-17 00:35:18.934 [INFO][3387] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:18.937286 kernel: audit: type=1101 audit(1747442118.927:299): pid=3402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:18.937354 kernel: audit: type=1103 audit(1747442118.932:300): pid=3402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:18.937373 kernel: audit: type=1006 audit(1747442118.932:301): pid=3402 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 May 17 00:35:18.937682 env[1307]: time="2025-05-17T00:35:18.937634930Z" level=info msg="TearDown network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\" successfully" May 17 00:35:18.937744 env[1307]: 
time="2025-05-17T00:35:18.937680977Z" level=info msg="StopPodSandbox for \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\" returns successfully" May 17 00:35:18.944586 kernel: audit: type=1300 audit(1747442118.932:301): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee41d81c0 a2=3 a3=0 items=0 ppid=1 pid=3402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:18.932000 audit[3402]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee41d81c0 a2=3 a3=0 items=0 ppid=1 pid=3402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:18.943465 systemd[1]: Started session-8.scope. May 17 00:35:18.943837 systemd-logind[1292]: New session 8 of user core. May 17 00:35:18.932000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:18.947094 kernel: audit: type=1327 audit(1747442118.932:301): proctitle=737368643A20636F7265205B707269765D May 17 00:35:18.949000 audit[3402]: USER_START pid=3402 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:18.950000 audit[3409]: CRED_ACQ pid=3409 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:18.958918 kernel: audit: type=1105 audit(1747442118.949:302): pid=3402 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:18.958971 kernel: audit: type=1103 audit(1747442118.950:303): pid=3409 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:19.071216 sshd[3402]: pam_unix(sshd:session): session closed for user core May 17 00:35:19.071000 audit[3402]: USER_END pid=3402 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:19.073442 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:43394.service: Deactivated successfully. May 17 00:35:19.074186 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:35:19.071000 audit[3402]: CRED_DISP pid=3402 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:19.080112 kernel: audit: type=1106 audit(1747442119.071:304): pid=3402 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:19.080169 kernel: audit: type=1104 audit(1747442119.071:305): pid=3402 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:19.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:43394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:19.080571 systemd-logind[1292]: Session 8 logged out. Waiting for processes to exit. May 17 00:35:19.081442 systemd-logind[1292]: Removed session 8. May 17 00:35:19.106543 kubelet[2108]: I0517 00:35:19.106510 2108 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r22h8\" (UniqueName: \"kubernetes.io/projected/380793ab-fb3a-44a5-9234-41b018bda4aa-kube-api-access-r22h8\") pod \"380793ab-fb3a-44a5-9234-41b018bda4aa\" (UID: \"380793ab-fb3a-44a5-9234-41b018bda4aa\") " May 17 00:35:19.106883 kubelet[2108]: I0517 00:35:19.106560 2108 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/380793ab-fb3a-44a5-9234-41b018bda4aa-whisker-backend-key-pair\") pod \"380793ab-fb3a-44a5-9234-41b018bda4aa\" (UID: \"380793ab-fb3a-44a5-9234-41b018bda4aa\") " May 17 00:35:19.106883 kubelet[2108]: I0517 00:35:19.106578 2108 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380793ab-fb3a-44a5-9234-41b018bda4aa-whisker-ca-bundle\") pod \"380793ab-fb3a-44a5-9234-41b018bda4aa\" (UID: \"380793ab-fb3a-44a5-9234-41b018bda4aa\") " May 17 00:35:19.106941 kubelet[2108]: I0517 00:35:19.106911 2108 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380793ab-fb3a-44a5-9234-41b018bda4aa-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "380793ab-fb3a-44a5-9234-41b018bda4aa" (UID: "380793ab-fb3a-44a5-9234-41b018bda4aa"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:35:19.108715 kubelet[2108]: I0517 00:35:19.108672 2108 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380793ab-fb3a-44a5-9234-41b018bda4aa-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "380793ab-fb3a-44a5-9234-41b018bda4aa" (UID: "380793ab-fb3a-44a5-9234-41b018bda4aa"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:35:19.109265 kubelet[2108]: I0517 00:35:19.109231 2108 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380793ab-fb3a-44a5-9234-41b018bda4aa-kube-api-access-r22h8" (OuterVolumeSpecName: "kube-api-access-r22h8") pod "380793ab-fb3a-44a5-9234-41b018bda4aa" (UID: "380793ab-fb3a-44a5-9234-41b018bda4aa"). InnerVolumeSpecName "kube-api-access-r22h8". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:35:19.207562 kubelet[2108]: I0517 00:35:19.207529 2108 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/380793ab-fb3a-44a5-9234-41b018bda4aa-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 17 00:35:19.207562 kubelet[2108]: I0517 00:35:19.207554 2108 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/380793ab-fb3a-44a5-9234-41b018bda4aa-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 17 00:35:19.207562 kubelet[2108]: I0517 00:35:19.207563 2108 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r22h8\" (UniqueName: \"kubernetes.io/projected/380793ab-fb3a-44a5-9234-41b018bda4aa-kube-api-access-r22h8\") on node \"localhost\" DevicePath \"\"" May 17 00:35:19.473159 kubelet[2108]: I0517 00:35:19.471811 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z7g47" 
podStartSLOduration=1.782790291 podStartE2EDuration="21.471754576s" podCreationTimestamp="2025-05-17 00:34:58 +0000 UTC" firstStartedPulling="2025-05-17 00:34:58.81968474 +0000 UTC m=+16.037438853" lastFinishedPulling="2025-05-17 00:35:18.508649025 +0000 UTC m=+35.726403138" observedRunningTime="2025-05-17 00:35:19.46978295 +0000 UTC m=+36.687537083" watchObservedRunningTime="2025-05-17 00:35:19.471754576 +0000 UTC m=+36.689508689" May 17 00:35:19.514036 systemd[1]: run-netns-cni\x2d7bf45826\x2d5956\x2d2ec4\x2deebe\x2dc8b31a0c8c9b.mount: Deactivated successfully. May 17 00:35:19.514186 systemd[1]: var-lib-kubelet-pods-380793ab\x2dfb3a\x2d44a5\x2d9234\x2d41b018bda4aa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr22h8.mount: Deactivated successfully. May 17 00:35:19.514277 systemd[1]: var-lib-kubelet-pods-380793ab\x2dfb3a\x2d44a5\x2d9234\x2d41b018bda4aa-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:35:19.609037 kubelet[2108]: I0517 00:35:19.608982 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/664372a8-af86-4567-a233-d8be21950e7b-whisker-ca-bundle\") pod \"whisker-55d44b8df7-v6qvx\" (UID: \"664372a8-af86-4567-a233-d8be21950e7b\") " pod="calico-system/whisker-55d44b8df7-v6qvx" May 17 00:35:19.609037 kubelet[2108]: I0517 00:35:19.609043 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrbb7\" (UniqueName: \"kubernetes.io/projected/664372a8-af86-4567-a233-d8be21950e7b-kube-api-access-wrbb7\") pod \"whisker-55d44b8df7-v6qvx\" (UID: \"664372a8-af86-4567-a233-d8be21950e7b\") " pod="calico-system/whisker-55d44b8df7-v6qvx" May 17 00:35:19.609279 kubelet[2108]: I0517 00:35:19.609105 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" 
(UniqueName: \"kubernetes.io/secret/664372a8-af86-4567-a233-d8be21950e7b-whisker-backend-key-pair\") pod \"whisker-55d44b8df7-v6qvx\" (UID: \"664372a8-af86-4567-a233-d8be21950e7b\") " pod="calico-system/whisker-55d44b8df7-v6qvx" May 17 00:35:19.777640 env[1307]: time="2025-05-17T00:35:19.777516195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55d44b8df7-v6qvx,Uid:664372a8-af86-4567-a233-d8be21950e7b,Namespace:calico-system,Attempt:0,}" May 17 00:35:20.060000 audit[3495]: AVC avc: denied { write } for pid=3495 comm="tee" name="fd" dev="proc" ino=26702 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:35:20.062000 audit[3506]: AVC avc: denied { write } for pid=3506 comm="tee" name="fd" dev="proc" ino=23991 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:35:20.062000 audit[3506]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd29c77e9 a2=241 a3=1b6 items=1 ppid=3453 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.062000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 17 00:35:20.062000 audit: PATH item=0 name="/dev/fd/63" inode=26698 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:20.062000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:35:20.063000 audit[3511]: AVC avc: denied { write } for pid=3511 comm="tee" name="fd" dev="proc" ino=25785 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:35:20.063000 audit[3511]: SYSCALL 
arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff8911e7e9 a2=241 a3=1b6 items=1 ppid=3450 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.063000 audit: CWD cwd="/etc/service/enabled/confd/log" May 17 00:35:20.063000 audit: PATH item=0 name="/dev/fd/63" inode=23993 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:20.063000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:35:20.060000 audit[3495]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe897597eb a2=241 a3=1b6 items=1 ppid=3469 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.060000 audit: CWD cwd="/etc/service/enabled/cni/log" May 17 00:35:20.060000 audit: PATH item=0 name="/dev/fd/63" inode=26693 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:20.060000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:35:20.069000 audit[3509]: AVC avc: denied { write } for pid=3509 comm="tee" name="fd" dev="proc" ino=26718 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:35:20.069000 audit[3509]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc80bb37d9 a2=241 a3=1b6 items=1 ppid=3454 pid=3509 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.069000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 17 00:35:20.069000 audit: PATH item=0 name="/dev/fd/63" inode=26699 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:20.079000 audit[3521]: AVC avc: denied { write } for pid=3521 comm="tee" name="fd" dev="proc" ino=25790 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:35:20.079000 audit[3521]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd10d2e7e9 a2=241 a3=1b6 items=1 ppid=3461 pid=3521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.079000 audit: CWD cwd="/etc/service/enabled/felix/log" May 17 00:35:20.079000 audit: PATH item=0 name="/dev/fd/63" inode=26721 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:20.079000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:35:20.069000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:35:20.140000 audit[3540]: AVC avc: denied { write } for pid=3540 comm="tee" name="fd" dev="proc" ino=23996 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:35:20.140000 audit[3540]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7fff26cf17ea a2=241 a3=1b6 items=1 ppid=3471 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.140000 audit: CWD cwd="/etc/service/enabled/bird/log" May 17 00:35:20.140000 audit: PATH item=0 name="/dev/fd/63" inode=24890 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:20.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:35:20.142000 audit[3550]: AVC avc: denied { write } for pid=3550 comm="tee" name="fd" dev="proc" ino=24903 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:35:20.142000 audit[3550]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdcb01d7da a2=241 a3=1b6 items=1 ppid=3466 pid=3550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.142000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 17 00:35:20.142000 audit: PATH item=0 name="/dev/fd/63" inode=26728 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:35:20.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:35:20.225415 systemd-networkd[1082]: calie1ecf6d6611: Link UP May 17 00:35:20.228664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:35:20.228788 kernel: 
IPv6: ADDRCONF(NETDEV_CHANGE): calie1ecf6d6611: link becomes ready May 17 00:35:20.228953 systemd-networkd[1082]: calie1ecf6d6611: Gained carrier May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.081 [INFO][3437] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.111 [INFO][3437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--55d44b8df7--v6qvx-eth0 whisker-55d44b8df7- calico-system 664372a8-af86-4567-a233-d8be21950e7b 969 0 2025-05-17 00:35:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55d44b8df7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-55d44b8df7-v6qvx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie1ecf6d6611 [] [] }} ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Namespace="calico-system" Pod="whisker-55d44b8df7-v6qvx" WorkloadEndpoint="localhost-k8s-whisker--55d44b8df7--v6qvx-" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.111 [INFO][3437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Namespace="calico-system" Pod="whisker-55d44b8df7-v6qvx" WorkloadEndpoint="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.155 [INFO][3543] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" HandleID="k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Workload="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.155 [INFO][3543] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" HandleID="k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Workload="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55d44b8df7-v6qvx", "timestamp":"2025-05-17 00:35:20.155101488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.155 [INFO][3543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.155 [INFO][3543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.155 [INFO][3543] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.167 [INFO][3543] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.182 [INFO][3543] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.185 [INFO][3543] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.187 [INFO][3543] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.188 [INFO][3543] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 
00:35:20.188 [INFO][3543] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.192 [INFO][3543] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4 May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.199 [INFO][3543] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.211 [INFO][3543] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.211 [INFO][3543] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" host="localhost" May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.211 [INFO][3543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:35:20.235258 env[1307]: 2025-05-17 00:35:20.211 [INFO][3543] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" HandleID="k8s-pod-network.1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Workload="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" May 17 00:35:20.235816 env[1307]: 2025-05-17 00:35:20.214 [INFO][3437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Namespace="calico-system" Pod="whisker-55d44b8df7-v6qvx" WorkloadEndpoint="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55d44b8df7--v6qvx-eth0", GenerateName:"whisker-55d44b8df7-", Namespace:"calico-system", SelfLink:"", UID:"664372a8-af86-4567-a233-d8be21950e7b", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55d44b8df7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55d44b8df7-v6qvx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1ecf6d6611", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:20.235816 env[1307]: 2025-05-17 00:35:20.214 [INFO][3437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Namespace="calico-system" Pod="whisker-55d44b8df7-v6qvx" WorkloadEndpoint="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" May 17 00:35:20.235816 env[1307]: 2025-05-17 00:35:20.214 [INFO][3437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1ecf6d6611 ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Namespace="calico-system" Pod="whisker-55d44b8df7-v6qvx" WorkloadEndpoint="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" May 17 00:35:20.235816 env[1307]: 2025-05-17 00:35:20.225 [INFO][3437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Namespace="calico-system" Pod="whisker-55d44b8df7-v6qvx" WorkloadEndpoint="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" May 17 00:35:20.235816 env[1307]: 2025-05-17 00:35:20.225 [INFO][3437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Namespace="calico-system" Pod="whisker-55d44b8df7-v6qvx" WorkloadEndpoint="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55d44b8df7--v6qvx-eth0", GenerateName:"whisker-55d44b8df7-", Namespace:"calico-system", SelfLink:"", UID:"664372a8-af86-4567-a233-d8be21950e7b", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 35, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55d44b8df7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4", Pod:"whisker-55d44b8df7-v6qvx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1ecf6d6611", MAC:"da:a6:53:56:cf:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:20.235816 env[1307]: 2025-05-17 00:35:20.232 [INFO][3437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4" Namespace="calico-system" Pod="whisker-55d44b8df7-v6qvx" WorkloadEndpoint="localhost-k8s-whisker--55d44b8df7--v6qvx-eth0" May 17 00:35:20.251010 env[1307]: time="2025-05-17T00:35:20.250647847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:20.251010 env[1307]: time="2025-05-17T00:35:20.250713099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:20.251010 env[1307]: time="2025-05-17T00:35:20.250733628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:20.251222 env[1307]: time="2025-05-17T00:35:20.251001491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4 pid=3582 runtime=io.containerd.runc.v2 May 17 00:35:20.275052 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:20.304502 env[1307]: time="2025-05-17T00:35:20.304449527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55d44b8df7-v6qvx,Uid:664372a8-af86-4567-a233-d8be21950e7b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1cd6aaf5f03cb74bb7ade04cbe2974f8e3b07a373a4b5c5f186923658e312fb4\"" May 17 00:35:20.305867 env[1307]: time="2025-05-17T00:35:20.305836924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:35:20.313000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.313000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.313000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.313000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.313000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:35:20.313000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.313000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.313000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.313000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.313000 audit: BPF prog-id=10 op=LOAD May 17 00:35:20.313000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7b9f1fc0 a2=98 a3=3 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.313000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.314000 audit: BPF prog-id=10 op=UNLOAD May 17 00:35:20.314000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit: BPF prog-id=11 op=LOAD May 17 00:35:20.314000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe7b9f1db0 a2=94 a3=54428f items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.314000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.314000 audit: BPF prog-id=11 op=UNLOAD May 17 00:35:20.314000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.314000 audit: BPF prog-id=12 op=LOAD May 17 00:35:20.314000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 
a1=7ffe7b9f1de0 a2=94 a3=2 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.314000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.314000 audit: BPF prog-id=12 op=UNLOAD May 17 00:35:20.414000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit: BPF prog-id=13 op=LOAD May 17 00:35:20.414000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe7b9f1ca0 a2=94 a3=1 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.414000 audit: BPF prog-id=13 op=UNLOAD May 17 00:35:20.414000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.414000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe7b9f1d70 a2=50 a3=7ffe7b9f1e50 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7b9f1cb0 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7b9f1ce0 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7b9f1bf0 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7b9f1d00 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
00:35:20.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7b9f1ce0 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7b9f1cd0 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7b9f1d00 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.421000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7b9f1ce0 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.421000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.421000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7b9f1d00 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe7b9f1cd0 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 
00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe7b9f1d40 a2=28 a3=0 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe7b9f1af0 a2=50 a3=1 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit: BPF prog-id=14 op=LOAD May 17 00:35:20.422000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe7b9f1af0 a2=94 a3=5 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.422000 audit: BPF prog-id=14 op=UNLOAD May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe7b9f1ba0 a2=50 a3=1 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe7b9f1cc0 a2=4 a3=38 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { confidentiality } for pid=3634 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:35:20.422000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe7b9f1d10 a2=94 a3=6 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { confidentiality } for pid=3634 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:35:20.422000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe7b9f14c0 a2=94 a3=88 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { perfmon } for pid=3634 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { bpf } for pid=3634 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.422000 audit[3634]: AVC avc: denied { confidentiality } for pid=3634 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:35:20.422000 audit[3634]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe7b9f14c0 a2=94 a3=88 items=0 ppid=3463 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.422000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit: BPF prog-id=15 op=LOAD May 17 00:35:20.429000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd898201c0 a2=98 a3=1999999999999999 items=0 ppid=3463 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.429000 
audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:35:20.429000 audit: BPF prog-id=15 op=UNLOAD May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit: BPF prog-id=16 op=LOAD May 17 00:35:20.429000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd898200a0 a2=94 a3=ffff items=0 ppid=3463 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.429000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:35:20.429000 audit: BPF prog-id=16 op=UNLOAD May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { perfmon } for pid=3637 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit[3637]: AVC avc: denied { bpf } for pid=3637 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.429000 audit: BPF prog-id=17 op=LOAD May 17 00:35:20.429000 audit[3637]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd898200e0 a2=94 a3=7ffd898202c0 items=0 ppid=3463 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.429000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:35:20.429000 audit: BPF prog-id=17 op=UNLOAD May 17 00:35:20.440289 kubelet[2108]: I0517 00:35:20.440255 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:20.475915 systemd-networkd[1082]: vxlan.calico: Link UP May 17 00:35:20.475925 systemd-networkd[1082]: vxlan.calico: Gained 
carrier May 17 00:35:20.487000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit: BPF prog-id=18 op=LOAD May 17 
00:35:20.487000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd285e4b0 a2=98 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.487000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.487000 audit: BPF prog-id=18 op=UNLOAD May 17 00:35:20.487000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { perfmon } for 
pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.487000 audit: BPF prog-id=19 op=LOAD May 17 00:35:20.487000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd285e2c0 a2=94 a3=54428f items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.487000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit: BPF prog-id=19 op=UNLOAD May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit: BPF prog-id=20 op=LOAD May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdd285e2f0 a2=94 a3=2 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit: BPF prog-id=20 op=UNLOAD May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdd285e1c0 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd285e1f0 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd285e100 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdd285e210 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdd285e1f0 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for 
pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdd285e1e0 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdd285e210 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd285e1f0 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd285e210 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdd285e1e0 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: 
denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdd285e250 a2=28 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit: BPF prog-id=21 op=LOAD May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd285e0c0 a2=94 a3=0 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.488000 audit: BPF prog-id=21 op=UNLOAD May 17 00:35:20.488000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.488000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffdd285e0b0 a2=50 a3=2800 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.488000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffdd285e0b0 a2=50 a3=2800 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.489000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit: BPF prog-id=22 op=LOAD May 17 00:35:20.489000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd285d8d0 a2=94 a3=2 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.489000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:35:20.489000 audit: BPF prog-id=22 op=UNLOAD May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { bpf } for pid=3662 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: AVC avc: denied { perfmon } for pid=3662 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.489000 audit[3662]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdd285d9d0 a2=94 a3=30 items=0 ppid=3463 pid=3662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.489000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 
00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit: BPF prog-id=24 op=LOAD May 17 00:35:20.491000 
audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc55f024c0 a2=98 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.491000 audit: BPF prog-id=24 op=UNLOAD May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit: BPF prog-id=25 op=LOAD May 17 00:35:20.491000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc55f022b0 a2=94 a3=54428f items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.491000 audit: BPF prog-id=25 op=UNLOAD May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.491000 audit: BPF prog-id=26 op=LOAD May 17 00:35:20.491000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc55f022e0 a2=94 a3=2 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.492000 audit: BPF prog-id=26 op=UNLOAD May 17 00:35:20.520374 env[1307]: time="2025-05-17T00:35:20.520309333Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 
Forbidden" host=ghcr.io May 17 00:35:20.521398 env[1307]: time="2025-05-17T00:35:20.521350099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:35:20.521627 kubelet[2108]: E0517 00:35:20.521578 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:20.521709 kubelet[2108]: E0517 00:35:20.521647 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:20.522826 kubelet[2108]: E0517 00:35:20.522755 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3da8b4fd3b234db5a0b65fe67fcf7d29,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrbb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d44b8df7-v6qvx_calico-system(664372a8-af86-4567-a233-d8be21950e7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:35:20.524950 env[1307]: time="2025-05-17T00:35:20.524689686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:35:20.596000 
audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit: BPF prog-id=27 op=LOAD May 17 00:35:20.596000 audit[3666]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc55f021a0 a2=94 a3=1 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.596000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.596000 audit: BPF prog-id=27 op=UNLOAD May 17 00:35:20.596000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.596000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc55f02270 a2=50 a3=7ffc55f02350 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.596000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55f021b0 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55f021e0 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55f020f0 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55f02200 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55f021e0 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55f021d0 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55f02200 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55f021e0 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55f02200 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc55f021d0 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.605000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.605000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc55f02240 a2=28 a3=0 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.605000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc55f01ff0 a2=50 a3=1 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit: BPF prog-id=28 op=LOAD May 17 00:35:20.606000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc55f01ff0 a2=94 a3=5 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.606000 audit: BPF prog-id=28 op=UNLOAD May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc55f020a0 a2=50 a3=1 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc55f021c0 a2=4 a3=38 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { confidentiality } for pid=3666 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:35:20.606000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc55f02210 a2=94 a3=6 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.606000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { confidentiality } for pid=3666 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:35:20.606000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc55f019c0 a2=94 a3=88 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { perfmon } for pid=3666 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { confidentiality } for pid=3666 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:35:20.606000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc55f019c0 a2=94 a3=88 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.606000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.606000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.606000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc55f033f0 a2=10 a3=208 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.606000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.607000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.607000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc55f03290 a2=10 a3=3 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.607000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.607000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.607000 audit[3666]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc55f03230 a2=10 a3=3 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.607000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.607000 audit[3666]: AVC avc: denied { bpf } for pid=3666 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:35:20.607000 audit[3666]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc55f03230 a2=10 a3=7 items=0 ppid=3463 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.607000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:35:20.614000 audit: BPF prog-id=23 op=UNLOAD May 17 00:35:20.653000 audit[3696]: NETFILTER_CFG table=mangle:99 family=2 entries=16 op=nft_register_chain pid=3696 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:20.653000 audit[3696]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffedbea550 a2=0 a3=7fffedbea53c items=0 ppid=3463 pid=3696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.653000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:20.655000 audit[3695]: NETFILTER_CFG table=nat:100 family=2 entries=15 op=nft_register_chain pid=3695 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:20.655000 audit[3695]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd88a4d390 a2=0 a3=7ffd88a4d37c items=0 ppid=3463 pid=3695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.655000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:20.660000 audit[3694]: NETFILTER_CFG table=raw:101 family=2 entries=21 op=nft_register_chain pid=3694 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:20.660000 audit[3694]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe52abba60 a2=0 a3=7ffe52abba4c items=0 ppid=3463 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.660000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:20.663000 audit[3699]: NETFILTER_CFG table=filter:102 family=2 entries=94 op=nft_register_chain pid=3699 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:20.663000 audit[3699]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffd13ce1840 a2=0 a3=7ffd13ce182c items=0 ppid=3463 pid=3699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:20.663000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:20.758428 env[1307]: time="2025-05-17T00:35:20.758336372Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:35:20.759437 env[1307]: time="2025-05-17T00:35:20.759398438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:35:20.759710 kubelet[2108]: E0517 00:35:20.759640 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:20.759794 kubelet[2108]: E0517 00:35:20.759717 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:20.759928 kubelet[2108]: E0517 00:35:20.759868 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrbb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d44b8df7-v6qvx_calico-system(664372a8-af86-4567-a233-d8be21950e7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:35:20.761128 kubelet[2108]: E0517 00:35:20.761056 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-55d44b8df7-v6qvx" podUID="664372a8-af86-4567-a233-d8be21950e7b" May 17 00:35:20.855619 env[1307]: time="2025-05-17T00:35:20.855497014Z" level=info msg="StopPodSandbox for \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\"" May 17 00:35:20.856481 env[1307]: time="2025-05-17T00:35:20.855592934Z" level=info msg="StopPodSandbox for \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\"" May 17 00:35:20.857811 kubelet[2108]: I0517 00:35:20.857585 2108 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="380793ab-fb3a-44a5-9234-41b018bda4aa" path="/var/lib/kubelet/pods/380793ab-fb3a-44a5-9234-41b018bda4aa/volumes" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.905 [INFO][3730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.906 [INFO][3730] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" iface="eth0" netns="/var/run/netns/cni-1be8f891-5658-c0ef-0cda-468661f4f379" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.907 [INFO][3730] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" iface="eth0" netns="/var/run/netns/cni-1be8f891-5658-c0ef-0cda-468661f4f379" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.907 [INFO][3730] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" iface="eth0" netns="/var/run/netns/cni-1be8f891-5658-c0ef-0cda-468661f4f379" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.907 [INFO][3730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.907 [INFO][3730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.928 [INFO][3747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.929 [INFO][3747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.929 [INFO][3747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.935 [WARNING][3747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.935 [INFO][3747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.936 [INFO][3747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:20.942419 env[1307]: 2025-05-17 00:35:20.940 [INFO][3730] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:20.944698 env[1307]: time="2025-05-17T00:35:20.942644167Z" level=info msg="TearDown network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\" successfully" May 17 00:35:20.944698 env[1307]: time="2025-05-17T00:35:20.942712936Z" level=info msg="StopPodSandbox for \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\" returns successfully" May 17 00:35:20.944698 env[1307]: time="2025-05-17T00:35:20.943744425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd64f56db-z62dt,Uid:b35c5827-c746-454d-b6bf-a8a0e8b71713,Namespace:calico-apiserver,Attempt:1,}" May 17 00:35:20.944889 systemd[1]: run-netns-cni\x2d1be8f891\x2d5658\x2dc0ef\x2d0cda\x2d468661f4f379.mount: Deactivated successfully. 
May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.906 [INFO][3735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.906 [INFO][3735] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" iface="eth0" netns="/var/run/netns/cni-98dedc43-604d-7d0f-0071-1166e946e708" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.907 [INFO][3735] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" iface="eth0" netns="/var/run/netns/cni-98dedc43-604d-7d0f-0071-1166e946e708" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.907 [INFO][3735] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" iface="eth0" netns="/var/run/netns/cni-98dedc43-604d-7d0f-0071-1166e946e708" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.907 [INFO][3735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.907 [INFO][3735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.932 [INFO][3748] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.932 [INFO][3748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.936 [INFO][3748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.942 [WARNING][3748] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.942 [INFO][3748] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.943 [INFO][3748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:20.948579 env[1307]: 2025-05-17 00:35:20.947 [INFO][3735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:20.950867 systemd[1]: run-netns-cni\x2d98dedc43\x2d604d\x2d7d0f\x2d0071\x2d1166e946e708.mount: Deactivated successfully. 
May 17 00:35:20.951506 env[1307]: time="2025-05-17T00:35:20.951450579Z" level=info msg="TearDown network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\" successfully" May 17 00:35:20.951506 env[1307]: time="2025-05-17T00:35:20.951493730Z" level=info msg="StopPodSandbox for \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\" returns successfully" May 17 00:35:20.952335 env[1307]: time="2025-05-17T00:35:20.952304994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-zf9xd,Uid:915e2165-3634-409e-af91-ef9388cac59f,Namespace:calico-system,Attempt:1,}" May 17 00:35:21.052597 systemd-networkd[1082]: cali1a6234a4a54: Link UP May 17 00:35:21.053979 systemd-networkd[1082]: cali1a6234a4a54: Gained carrier May 17 00:35:21.054513 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1a6234a4a54: link becomes ready May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:20.990 [INFO][3764] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0 calico-apiserver-dd64f56db- calico-apiserver b35c5827-c746-454d-b6bf-a8a0e8b71713 993 0 2025-05-17 00:34:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd64f56db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dd64f56db-z62dt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a6234a4a54 [] [] }} ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-z62dt" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--z62dt-" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:20.990 [INFO][3764] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-z62dt" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.017 [INFO][3788] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" HandleID="k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.017 [INFO][3788] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" HandleID="k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dd64f56db-z62dt", "timestamp":"2025-05-17 00:35:21.017124188 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.017 [INFO][3788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.017 [INFO][3788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.017 [INFO][3788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.023 [INFO][3788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" host="localhost" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.033 [INFO][3788] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.036 [INFO][3788] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.038 [INFO][3788] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.039 [INFO][3788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.039 [INFO][3788] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" host="localhost" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.041 [INFO][3788] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.044 [INFO][3788] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" host="localhost" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.048 [INFO][3788] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" host="localhost" May 17 
00:35:21.070556 env[1307]: 2025-05-17 00:35:21.048 [INFO][3788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" host="localhost" May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.048 [INFO][3788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.070556 env[1307]: 2025-05-17 00:35:21.048 [INFO][3788] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" HandleID="k8s-pod-network.ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:21.071146 env[1307]: 2025-05-17 00:35:21.050 [INFO][3764] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-z62dt" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0", GenerateName:"calico-apiserver-dd64f56db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b35c5827-c746-454d-b6bf-a8a0e8b71713", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd64f56db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dd64f56db-z62dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a6234a4a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.071146 env[1307]: 2025-05-17 00:35:21.050 [INFO][3764] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-z62dt" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:21.071146 env[1307]: 2025-05-17 00:35:21.050 [INFO][3764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a6234a4a54 ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-z62dt" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:21.071146 env[1307]: 2025-05-17 00:35:21.053 [INFO][3764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-z62dt" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:21.071146 env[1307]: 2025-05-17 00:35:21.054 [INFO][3764] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-z62dt" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0", GenerateName:"calico-apiserver-dd64f56db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b35c5827-c746-454d-b6bf-a8a0e8b71713", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd64f56db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b", Pod:"calico-apiserver-dd64f56db-z62dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a6234a4a54", MAC:"c6:f4:b2:c4:a1:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.071146 env[1307]: 2025-05-17 00:35:21.069 [INFO][3764] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-z62dt" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 
00:35:21.080159 env[1307]: time="2025-05-17T00:35:21.080090461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:21.080964 env[1307]: time="2025-05-17T00:35:21.080136307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:21.080964 env[1307]: time="2025-05-17T00:35:21.080146696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:21.080964 env[1307]: time="2025-05-17T00:35:21.080388641Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b pid=3823 runtime=io.containerd.runc.v2 May 17 00:35:21.079000 audit[3825]: NETFILTER_CFG table=filter:103 family=2 entries=50 op=nft_register_chain pid=3825 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:21.079000 audit[3825]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffe7aa9bb00 a2=0 a3=7ffe7aa9baec items=0 ppid=3463 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:21.079000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:21.103451 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:21.129632 env[1307]: time="2025-05-17T00:35:21.127882988Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-dd64f56db-z62dt,Uid:b35c5827-c746-454d-b6bf-a8a0e8b71713,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b\"" May 17 00:35:21.130201 env[1307]: time="2025-05-17T00:35:21.130177870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:35:21.158119 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidac0a455c90: link becomes ready May 17 00:35:21.158396 systemd-networkd[1082]: calidac0a455c90: Link UP May 17 00:35:21.158509 systemd-networkd[1082]: calidac0a455c90: Gained carrier May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.002 [INFO][3776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0 goldmane-8f77d7b6c- calico-system 915e2165-3634-409e-af91-ef9388cac59f 994 0 2025-05-17 00:34:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-8f77d7b6c-zf9xd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidac0a455c90 [] [] }} ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Namespace="calico-system" Pod="goldmane-8f77d7b6c-zf9xd" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--zf9xd-" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.002 [INFO][3776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Namespace="calico-system" Pod="goldmane-8f77d7b6c-zf9xd" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.030 [INFO][3797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" HandleID="k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.030 [INFO][3797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" HandleID="k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e2180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-8f77d7b6c-zf9xd", "timestamp":"2025-05-17 00:35:21.030287296 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.030 [INFO][3797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.048 [INFO][3797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.048 [INFO][3797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.124 [INFO][3797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" host="localhost" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.134 [INFO][3797] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.138 [INFO][3797] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.140 [INFO][3797] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.142 [INFO][3797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.142 [INFO][3797] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" host="localhost" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.143 [INFO][3797] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80 May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.147 [INFO][3797] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" host="localhost" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.152 [INFO][3797] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" host="localhost" May 17 
00:35:21.168343 env[1307]: 2025-05-17 00:35:21.152 [INFO][3797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" host="localhost" May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.152 [INFO][3797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.168343 env[1307]: 2025-05-17 00:35:21.152 [INFO][3797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" HandleID="k8s-pod-network.c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:21.168956 env[1307]: 2025-05-17 00:35:21.153 [INFO][3776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Namespace="calico-system" Pod="goldmane-8f77d7b6c-zf9xd" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"915e2165-3634-409e-af91-ef9388cac59f", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-8f77d7b6c-zf9xd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidac0a455c90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.168956 env[1307]: 2025-05-17 00:35:21.154 [INFO][3776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Namespace="calico-system" Pod="goldmane-8f77d7b6c-zf9xd" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:21.168956 env[1307]: 2025-05-17 00:35:21.154 [INFO][3776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidac0a455c90 ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Namespace="calico-system" Pod="goldmane-8f77d7b6c-zf9xd" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:21.168956 env[1307]: 2025-05-17 00:35:21.158 [INFO][3776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Namespace="calico-system" Pod="goldmane-8f77d7b6c-zf9xd" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:21.168956 env[1307]: 2025-05-17 00:35:21.158 [INFO][3776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Namespace="calico-system" Pod="goldmane-8f77d7b6c-zf9xd" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"915e2165-3634-409e-af91-ef9388cac59f", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80", Pod:"goldmane-8f77d7b6c-zf9xd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidac0a455c90", MAC:"aa:6c:60:d7:68:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:21.168956 env[1307]: 2025-05-17 00:35:21.166 [INFO][3776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80" Namespace="calico-system" Pod="goldmane-8f77d7b6c-zf9xd" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:21.181000 audit[3875]: NETFILTER_CFG table=filter:104 family=2 entries=48 op=nft_register_chain pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:21.181000 audit[3875]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7ffc93b33500 a2=0 a3=7ffc93b334ec 
items=0 ppid=3463 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:21.181000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:21.183140 env[1307]: time="2025-05-17T00:35:21.182061114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:21.183140 env[1307]: time="2025-05-17T00:35:21.182117691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:21.183140 env[1307]: time="2025-05-17T00:35:21.182130094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:21.183140 env[1307]: time="2025-05-17T00:35:21.182271990Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80 pid=3877 runtime=io.containerd.runc.v2 May 17 00:35:21.201193 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:21.222628 env[1307]: time="2025-05-17T00:35:21.222097194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-zf9xd,Uid:915e2165-3634-409e-af91-ef9388cac59f,Namespace:calico-system,Attempt:1,} returns sandbox id \"c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80\"" May 17 00:35:21.405263 systemd-networkd[1082]: calie1ecf6d6611: Gained IPv6LL May 17 00:35:21.445222 kubelet[2108]: E0517 00:35:21.445184 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-55d44b8df7-v6qvx" podUID="664372a8-af86-4567-a233-d8be21950e7b" May 17 00:35:21.462000 audit[3911]: NETFILTER_CFG table=filter:105 family=2 entries=20 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:21.462000 audit[3911]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffc10aeb10 a2=0 a3=7fffc10aeafc items=0 ppid=2253 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:21.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:21.467000 audit[3911]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:21.467000 audit[3911]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffc10aeb10 a2=0 a3=0 items=0 ppid=2253 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:21.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:21.855011 env[1307]: time="2025-05-17T00:35:21.854976583Z" level=info msg="StopPodSandbox for \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\"" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.889 [INFO][3923] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.889 [INFO][3923] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" iface="eth0" netns="/var/run/netns/cni-64d47257-ffbd-a020-3d7e-37f017e34629" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.889 [INFO][3923] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" iface="eth0" netns="/var/run/netns/cni-64d47257-ffbd-a020-3d7e-37f017e34629" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.893 [INFO][3923] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" iface="eth0" netns="/var/run/netns/cni-64d47257-ffbd-a020-3d7e-37f017e34629" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.893 [INFO][3923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.893 [INFO][3923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.913 [INFO][3932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.913 [INFO][3932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.913 [INFO][3932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.918 [WARNING][3932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.918 [INFO][3932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.920 [INFO][3932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:21.923298 env[1307]: 2025-05-17 00:35:21.921 [INFO][3923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:21.923974 env[1307]: time="2025-05-17T00:35:21.923438112Z" level=info msg="TearDown network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\" successfully" May 17 00:35:21.923974 env[1307]: time="2025-05-17T00:35:21.923480322Z" level=info msg="StopPodSandbox for \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\" returns successfully" May 17 00:35:21.925800 systemd[1]: run-netns-cni\x2d64d47257\x2dffbd\x2da020\x2d3d7e\x2d37f017e34629.mount: Deactivated successfully. 
May 17 00:35:21.926157 kubelet[2108]: E0517 00:35:21.926121 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:21.927169 env[1307]: time="2025-05-17T00:35:21.927130762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h6snv,Uid:156114d2-bfb2-42a0-a77e-b4eed0e196ef,Namespace:kube-system,Attempt:1,}" May 17 00:35:22.020278 systemd-networkd[1082]: cali8337b6fea0b: Link UP May 17 00:35:22.022732 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:35:22.022826 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8337b6fea0b: link becomes ready May 17 00:35:22.022934 systemd-networkd[1082]: cali8337b6fea0b: Gained carrier May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.967 [INFO][3941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0 coredns-7c65d6cfc9- kube-system 156114d2-bfb2-42a0-a77e-b4eed0e196ef 1017 0 2025-05-17 00:34:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-h6snv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8337b6fea0b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h6snv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--h6snv-" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.967 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h6snv" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.989 [INFO][3955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" HandleID="k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.989 [INFO][3955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" HandleID="k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-h6snv", "timestamp":"2025-05-17 00:35:21.989315546 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.989 [INFO][3955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.989 [INFO][3955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.989 [INFO][3955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.995 [INFO][3955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" host="localhost" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:21.999 [INFO][3955] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.002 [INFO][3955] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.004 [INFO][3955] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.006 [INFO][3955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.006 [INFO][3955] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" host="localhost" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.007 [INFO][3955] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5 May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.012 [INFO][3955] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" host="localhost" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.016 [INFO][3955] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" host="localhost" May 17 
00:35:22.035917 env[1307]: 2025-05-17 00:35:22.016 [INFO][3955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" host="localhost" May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.016 [INFO][3955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.035917 env[1307]: 2025-05-17 00:35:22.016 [INFO][3955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" HandleID="k8s-pod-network.0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:22.036500 env[1307]: 2025-05-17 00:35:22.018 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h6snv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"156114d2-bfb2-42a0-a77e-b4eed0e196ef", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7c65d6cfc9-h6snv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8337b6fea0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:22.036500 env[1307]: 2025-05-17 00:35:22.018 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h6snv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:22.036500 env[1307]: 2025-05-17 00:35:22.018 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8337b6fea0b ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h6snv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:22.036500 env[1307]: 2025-05-17 00:35:22.023 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h6snv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:22.036500 env[1307]: 2025-05-17 00:35:22.023 [INFO][3941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h6snv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"156114d2-bfb2-42a0-a77e-b4eed0e196ef", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5", Pod:"coredns-7c65d6cfc9-h6snv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8337b6fea0b", MAC:"36:41:01:b3:6c:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:22.036500 env[1307]: 2025-05-17 00:35:22.033 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h6snv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:22.042000 audit[3974]: NETFILTER_CFG table=filter:107 family=2 entries=50 op=nft_register_chain pid=3974 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:22.042000 audit[3974]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffc9fe77820 a2=0 a3=7ffc9fe7780c items=0 ppid=3463 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:22.042000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:22.047478 env[1307]: time="2025-05-17T00:35:22.047410134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:22.047554 env[1307]: time="2025-05-17T00:35:22.047491077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:22.047554 env[1307]: time="2025-05-17T00:35:22.047513219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:22.047779 env[1307]: time="2025-05-17T00:35:22.047731348Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5 pid=3981 runtime=io.containerd.runc.v2 May 17 00:35:22.070388 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:22.091951 env[1307]: time="2025-05-17T00:35:22.091908432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h6snv,Uid:156114d2-bfb2-42a0-a77e-b4eed0e196ef,Namespace:kube-system,Attempt:1,} returns sandbox id \"0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5\"" May 17 00:35:22.092542 kubelet[2108]: E0517 00:35:22.092521 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:22.096250 env[1307]: time="2025-05-17T00:35:22.095983740Z" level=info msg="CreateContainer within sandbox \"0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:35:22.112224 env[1307]: time="2025-05-17T00:35:22.112127136Z" level=info msg="CreateContainer within sandbox \"0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0e0502c2a060896cd326f2bca89d01cd354a14723b014be51a5680149387fb4\"" May 17 00:35:22.113416 env[1307]: time="2025-05-17T00:35:22.113365543Z" level=info msg="StartContainer for \"e0e0502c2a060896cd326f2bca89d01cd354a14723b014be51a5680149387fb4\"" May 17 00:35:22.152166 env[1307]: time="2025-05-17T00:35:22.152124516Z" level=info msg="StartContainer for \"e0e0502c2a060896cd326f2bca89d01cd354a14723b014be51a5680149387fb4\" returns successfully" 
May 17 00:35:22.173230 systemd-networkd[1082]: vxlan.calico: Gained IPv6LL May 17 00:35:22.429209 systemd-networkd[1082]: calidac0a455c90: Gained IPv6LL May 17 00:35:22.450542 kubelet[2108]: E0517 00:35:22.450138 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:22.456670 kubelet[2108]: I0517 00:35:22.456643 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:22.468490 kubelet[2108]: I0517 00:35:22.468190 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-h6snv" podStartSLOduration=35.468169542 podStartE2EDuration="35.468169542s" podCreationTimestamp="2025-05-17 00:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:22.467832068 +0000 UTC m=+39.685586171" watchObservedRunningTime="2025-05-17 00:35:22.468169542 +0000 UTC m=+39.685923665" May 17 00:35:22.493000 audit[4072]: NETFILTER_CFG table=filter:108 family=2 entries=20 op=nft_register_rule pid=4072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:22.493000 audit[4072]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdeb2f1630 a2=0 a3=7ffdeb2f161c items=0 ppid=2253 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:22.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:22.500000 audit[4072]: NETFILTER_CFG table=nat:109 family=2 entries=14 op=nft_register_rule pid=4072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:22.500000 audit[4072]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdeb2f1630 a2=0 a3=0 items=0 ppid=2253 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:22.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:22.522000 audit[4079]: NETFILTER_CFG table=filter:110 family=2 entries=17 op=nft_register_rule pid=4079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:22.522000 audit[4079]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed4f80040 a2=0 a3=7ffed4f8002c items=0 ppid=2253 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:22.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:22.530000 audit[4079]: NETFILTER_CFG table=nat:111 family=2 entries=35 op=nft_register_chain pid=4079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:22.530000 audit[4079]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffed4f80040 a2=0 a3=7ffed4f8002c items=0 ppid=2253 pid=4079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:22.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:22.855737 env[1307]: time="2025-05-17T00:35:22.855667857Z" level=info msg="StopPodSandbox for 
\"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\"" May 17 00:35:22.877318 systemd-networkd[1082]: cali1a6234a4a54: Gained IPv6LL May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.892 [INFO][4113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.892 [INFO][4113] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" iface="eth0" netns="/var/run/netns/cni-afdad05d-cbec-fced-b6f3-ef3e4d861dc7" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.893 [INFO][4113] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" iface="eth0" netns="/var/run/netns/cni-afdad05d-cbec-fced-b6f3-ef3e4d861dc7" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.893 [INFO][4113] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" iface="eth0" netns="/var/run/netns/cni-afdad05d-cbec-fced-b6f3-ef3e4d861dc7" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.893 [INFO][4113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.894 [INFO][4113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.910 [INFO][4123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.910 [INFO][4123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.910 [INFO][4123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.916 [WARNING][4123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.916 [INFO][4123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.917 [INFO][4123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:22.919638 env[1307]: 2025-05-17 00:35:22.918 [INFO][4113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:22.920087 env[1307]: time="2025-05-17T00:35:22.919911078Z" level=info msg="TearDown network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\" successfully" May 17 00:35:22.920087 env[1307]: time="2025-05-17T00:35:22.919962295Z" level=info msg="StopPodSandbox for \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\" returns successfully" May 17 00:35:22.922495 env[1307]: time="2025-05-17T00:35:22.922464286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8db7c4fcb-w875d,Uid:da68fa8b-b750-48ef-8ed0-edd244e098a4,Namespace:calico-system,Attempt:1,}" May 17 00:35:22.922505 systemd[1]: run-netns-cni\x2dafdad05d\x2dcbec\x2dfced\x2db6f3\x2def3e4d861dc7.mount: Deactivated successfully. 
May 17 00:35:23.017873 systemd-networkd[1082]: cali84c4cc541eb: Link UP May 17 00:35:23.020090 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali84c4cc541eb: link becomes ready May 17 00:35:23.026557 systemd-networkd[1082]: cali84c4cc541eb: Gained carrier May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.965 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0 calico-kube-controllers-8db7c4fcb- calico-system da68fa8b-b750-48ef-8ed0-edd244e098a4 1045 0 2025-05-17 00:34:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8db7c4fcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8db7c4fcb-w875d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali84c4cc541eb [] [] }} ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Namespace="calico-system" Pod="calico-kube-controllers-8db7c4fcb-w875d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.965 [INFO][4130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Namespace="calico-system" Pod="calico-kube-controllers-8db7c4fcb-w875d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.986 [INFO][4146] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" HandleID="k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" 
Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.986 [INFO][4146] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" HandleID="k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000493140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8db7c4fcb-w875d", "timestamp":"2025-05-17 00:35:22.986353642 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.986 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.986 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.986 [INFO][4146] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.992 [INFO][4146] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" host="localhost" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.996 [INFO][4146] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:22.999 [INFO][4146] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:23.000 [INFO][4146] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:23.002 [INFO][4146] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:23.002 [INFO][4146] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" host="localhost" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:23.003 [INFO][4146] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:23.008 [INFO][4146] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" host="localhost" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:23.014 [INFO][4146] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" host="localhost" May 17 
00:35:23.034966 env[1307]: 2025-05-17 00:35:23.014 [INFO][4146] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" host="localhost" May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:23.014 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:23.034966 env[1307]: 2025-05-17 00:35:23.014 [INFO][4146] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" HandleID="k8s-pod-network.438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:23.036032 env[1307]: 2025-05-17 00:35:23.016 [INFO][4130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Namespace="calico-system" Pod="calico-kube-controllers-8db7c4fcb-w875d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0", GenerateName:"calico-kube-controllers-8db7c4fcb-", Namespace:"calico-system", SelfLink:"", UID:"da68fa8b-b750-48ef-8ed0-edd244e098a4", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8db7c4fcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8db7c4fcb-w875d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84c4cc541eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:23.036032 env[1307]: 2025-05-17 00:35:23.016 [INFO][4130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Namespace="calico-system" Pod="calico-kube-controllers-8db7c4fcb-w875d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:23.036032 env[1307]: 2025-05-17 00:35:23.016 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84c4cc541eb ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Namespace="calico-system" Pod="calico-kube-controllers-8db7c4fcb-w875d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:23.036032 env[1307]: 2025-05-17 00:35:23.019 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Namespace="calico-system" Pod="calico-kube-controllers-8db7c4fcb-w875d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:23.036032 env[1307]: 2025-05-17 00:35:23.020 [INFO][4130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Namespace="calico-system" Pod="calico-kube-controllers-8db7c4fcb-w875d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0", GenerateName:"calico-kube-controllers-8db7c4fcb-", Namespace:"calico-system", SelfLink:"", UID:"da68fa8b-b750-48ef-8ed0-edd244e098a4", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8db7c4fcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f", Pod:"calico-kube-controllers-8db7c4fcb-w875d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84c4cc541eb", MAC:"1a:4d:ac:b7:33:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:23.036032 env[1307]: 2025-05-17 00:35:23.031 [INFO][4130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f" Namespace="calico-system" Pod="calico-kube-controllers-8db7c4fcb-w875d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:23.046000 audit[4169]: NETFILTER_CFG table=filter:112 family=2 entries=54 op=nft_register_chain pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:23.046000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=25992 a0=3 a1=7ffdd33c2eb0 a2=0 a3=7ffdd33c2e9c items=0 ppid=3463 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:23.046000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:23.055087 env[1307]: time="2025-05-17T00:35:23.055013673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:23.055164 env[1307]: time="2025-05-17T00:35:23.055094114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:23.055164 env[1307]: time="2025-05-17T00:35:23.055124461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:23.055289 env[1307]: time="2025-05-17T00:35:23.055249736Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f pid=4172 runtime=io.containerd.runc.v2 May 17 00:35:23.075398 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:23.099359 env[1307]: time="2025-05-17T00:35:23.099311416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8db7c4fcb-w875d,Uid:da68fa8b-b750-48ef-8ed0-edd244e098a4,Namespace:calico-system,Attempt:1,} returns sandbox id \"438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f\"" May 17 00:35:23.389549 systemd-networkd[1082]: cali8337b6fea0b: Gained IPv6LL May 17 00:35:23.453404 kubelet[2108]: E0517 00:35:23.453373 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:23.855785 env[1307]: time="2025-05-17T00:35:23.855715313Z" level=info msg="StopPodSandbox for \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\"" May 17 00:35:24.080503 kernel: kauditd_printk_skb: 554 callbacks suppressed May 17 00:35:24.080640 kernel: audit: type=1130 audit(1747442124.073:419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:43396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:24.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:43396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:24.074103 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:43396.service. May 17 00:35:24.081045 env[1307]: time="2025-05-17T00:35:24.077331647Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:24.081045 env[1307]: time="2025-05-17T00:35:24.080001372Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:24.082377 env[1307]: time="2025-05-17T00:35:24.082315060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:24.084234 env[1307]: time="2025-05-17T00:35:24.084201153Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:24.084745 env[1307]: time="2025-05-17T00:35:24.084686755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:35:24.088575 env[1307]: time="2025-05-17T00:35:24.088532951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:35:24.089559 env[1307]: time="2025-05-17T00:35:24.089509717Z" level=info msg="CreateContainer within sandbox \"ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.061 [INFO][4218] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.061 [INFO][4218] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" iface="eth0" netns="/var/run/netns/cni-9a7dc95b-0a60-bd5c-4c46-8c0e191ec4ef" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.061 [INFO][4218] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" iface="eth0" netns="/var/run/netns/cni-9a7dc95b-0a60-bd5c-4c46-8c0e191ec4ef" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.061 [INFO][4218] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" iface="eth0" netns="/var/run/netns/cni-9a7dc95b-0a60-bd5c-4c46-8c0e191ec4ef" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.061 [INFO][4218] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.061 [INFO][4218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.087 [INFO][4227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.087 [INFO][4227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.087 [INFO][4227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.092 [WARNING][4227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.092 [INFO][4227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.093 [INFO][4227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:24.098095 env[1307]: 2025-05-17 00:35:24.096 [INFO][4218] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:24.100476 env[1307]: time="2025-05-17T00:35:24.098228277Z" level=info msg="TearDown network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\" successfully" May 17 00:35:24.100476 env[1307]: time="2025-05-17T00:35:24.098256169Z" level=info msg="StopPodSandbox for \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\" returns successfully" May 17 00:35:24.100476 env[1307]: time="2025-05-17T00:35:24.098972245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd64f56db-gn2th,Uid:ffd190ac-644b-4bbf-bd2f-feed5f4c93a6,Namespace:calico-apiserver,Attempt:1,}" May 17 00:35:24.100864 systemd[1]: run-netns-cni\x2d9a7dc95b\x2d0a60\x2dbd5c\x2d4c46\x2d8c0e191ec4ef.mount: Deactivated successfully. 
May 17 00:35:24.111213 env[1307]: time="2025-05-17T00:35:24.111138102Z" level=info msg="CreateContainer within sandbox \"ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1932de1ff72f9463805ceedd072e2d1559b59a0fd38e6d346580d314098e1f1f\"" May 17 00:35:24.111778 env[1307]: time="2025-05-17T00:35:24.111739201Z" level=info msg="StartContainer for \"1932de1ff72f9463805ceedd072e2d1559b59a0fd38e6d346580d314098e1f1f\"" May 17 00:35:24.113000 audit[4233]: USER_ACCT pid=4233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.119662 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 43396 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:24.123536 kernel: audit: type=1101 audit(1747442124.113:420): pid=4233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.138389 kernel: audit: type=1103 audit(1747442124.123:421): pid=4233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.138580 kernel: audit: type=1006 audit(1747442124.123:422): pid=4233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 May 17 00:35:24.138617 kernel: audit: type=1300 audit(1747442124.123:422): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7495f580 a2=3 a3=0 items=0 ppid=1 pid=4233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:24.123000 audit[4233]: CRED_ACQ pid=4233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.123000 audit[4233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7495f580 a2=3 a3=0 items=0 ppid=1 pid=4233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:24.130444 systemd-logind[1292]: New session 9 of user core. May 17 00:35:24.126152 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:24.132837 systemd[1]: Started session-9.scope. May 17 00:35:24.123000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:24.142114 kernel: audit: type=1327 audit(1747442124.123:422): proctitle=737368643A20636F7265205B707269765D May 17 00:35:24.141000 audit[4233]: USER_START pid=4233 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.153053 kernel: audit: type=1105 audit(1747442124.141:423): pid=4233 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.153263 kernel: audit: type=1103 audit(1747442124.142:424): pid=4268 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.142000 audit[4268]: CRED_ACQ pid=4268 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.206012 env[1307]: time="2025-05-17T00:35:24.205954853Z" level=info msg="StartContainer for \"1932de1ff72f9463805ceedd072e2d1559b59a0fd38e6d346580d314098e1f1f\" returns successfully" May 17 00:35:24.255015 systemd-networkd[1082]: cali1070e847677: Link UP May 17 00:35:24.258784 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:35:24.258832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1070e847677: link becomes ready May 17 00:35:24.258627 systemd-networkd[1082]: cali1070e847677: Gained carrier May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.173 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0 calico-apiserver-dd64f56db- calico-apiserver ffd190ac-644b-4bbf-bd2f-feed5f4c93a6 1057 0 2025-05-17 00:34:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd64f56db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dd64f56db-gn2th eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1070e847677 [] [] }} ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-gn2th" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--gn2th-" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.173 [INFO][4254] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-gn2th" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.215 [INFO][4278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" HandleID="k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.217 [INFO][4278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" HandleID="k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dd64f56db-gn2th", "timestamp":"2025-05-17 00:35:24.215389509 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.217 [INFO][4278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.217 [INFO][4278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.217 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.223 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" host="localhost" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.227 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.230 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.232 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.236 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.236 [INFO][4278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" host="localhost" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.237 [INFO][4278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789 May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.241 [INFO][4278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" host="localhost" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.249 [INFO][4278] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" host="localhost" May 17 
00:35:24.269085 env[1307]: 2025-05-17 00:35:24.249 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" host="localhost" May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.250 [INFO][4278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:24.269085 env[1307]: 2025-05-17 00:35:24.250 [INFO][4278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" HandleID="k8s-pod-network.df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.269648 env[1307]: 2025-05-17 00:35:24.252 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-gn2th" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0", GenerateName:"calico-apiserver-dd64f56db-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffd190ac-644b-4bbf-bd2f-feed5f4c93a6", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd64f56db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dd64f56db-gn2th", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1070e847677", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:24.269648 env[1307]: 2025-05-17 00:35:24.252 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-gn2th" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.269648 env[1307]: 2025-05-17 00:35:24.252 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1070e847677 ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-gn2th" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.269648 env[1307]: 2025-05-17 00:35:24.259 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-gn2th" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:24.269648 env[1307]: 2025-05-17 00:35:24.259 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-gn2th" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0", GenerateName:"calico-apiserver-dd64f56db-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffd190ac-644b-4bbf-bd2f-feed5f4c93a6", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd64f56db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789", Pod:"calico-apiserver-dd64f56db-gn2th", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1070e847677", MAC:"02:90:4f:29:da:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:24.269648 env[1307]: 2025-05-17 00:35:24.267 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789" Namespace="calico-apiserver" Pod="calico-apiserver-dd64f56db-gn2th" WorkloadEndpoint="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 
00:35:24.279868 env[1307]: time="2025-05-17T00:35:24.279809731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:24.280084 env[1307]: time="2025-05-17T00:35:24.280042769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:24.280190 env[1307]: time="2025-05-17T00:35:24.280167923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:24.280487 env[1307]: time="2025-05-17T00:35:24.280423554Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789 pid=4325 runtime=io.containerd.runc.v2 May 17 00:35:24.292389 kernel: audit: type=1325 audit(1747442124.283:425): table=filter:113 family=2 entries=55 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:24.292509 kernel: audit: type=1300 audit(1747442124.283:425): arch=c000003e syscall=46 success=yes exit=28288 a0=3 a1=7ffcf0a15070 a2=0 a3=7ffcf0a1505c items=0 ppid=3463 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:24.283000 audit[4335]: NETFILTER_CFG table=filter:113 family=2 entries=55 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:24.283000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=28288 a0=3 a1=7ffcf0a15070 a2=0 a3=7ffcf0a1505c items=0 ppid=3463 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:24.283000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:24.286000 audit[4233]: USER_END pid=4233 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.286000 audit[4233]: CRED_DISP pid=4233 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:24.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:43396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:24.288522 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:43396.service: Deactivated successfully. May 17 00:35:24.286417 sshd[4233]: pam_unix(sshd:session): session closed for user core May 17 00:35:24.289245 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:35:24.290271 systemd-logind[1292]: Session 9 logged out. Waiting for processes to exit. May 17 00:35:24.291088 systemd-logind[1292]: Removed session 9. 
May 17 00:35:24.305778 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:24.328252 env[1307]: time="2025-05-17T00:35:24.328195748Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:35:24.330565 env[1307]: time="2025-05-17T00:35:24.330530214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:35:24.331193 kubelet[2108]: E0517 00:35:24.330697 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:24.331193 kubelet[2108]: E0517 00:35:24.330747 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:24.331193 kubelet[2108]: E0517 00:35:24.330951 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckmv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-zf9xd_calico-system(915e2165-3634-409e-af91-ef9388cac59f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:35:24.331729 env[1307]: time="2025-05-17T00:35:24.331709380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:35:24.332945 kubelet[2108]: E0517 00:35:24.332897 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-zf9xd" podUID="915e2165-3634-409e-af91-ef9388cac59f" May 17 00:35:24.335027 env[1307]: time="2025-05-17T00:35:24.335005833Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-dd64f56db-gn2th,Uid:ffd190ac-644b-4bbf-bd2f-feed5f4c93a6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789\"" May 17 00:35:24.337171 env[1307]: time="2025-05-17T00:35:24.337000921Z" level=info msg="CreateContainer within sandbox \"df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:35:24.350020 env[1307]: time="2025-05-17T00:35:24.349995075Z" level=info msg="CreateContainer within sandbox \"df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3ea3758d2893fe94997f7453cad1d86720625309494ede7ce0d37c1a1d2e4543\"" May 17 00:35:24.350555 env[1307]: time="2025-05-17T00:35:24.350538506Z" level=info msg="StartContainer for \"3ea3758d2893fe94997f7453cad1d86720625309494ede7ce0d37c1a1d2e4543\"" May 17 00:35:24.406725 env[1307]: time="2025-05-17T00:35:24.405026026Z" level=info msg="StartContainer for \"3ea3758d2893fe94997f7453cad1d86720625309494ede7ce0d37c1a1d2e4543\" returns successfully" May 17 00:35:24.461996 kubelet[2108]: E0517 00:35:24.459897 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:24.462608 kubelet[2108]: E0517 00:35:24.462578 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-zf9xd" podUID="915e2165-3634-409e-af91-ef9388cac59f" May 17 00:35:24.473303 kubelet[2108]: I0517 00:35:24.469906 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd64f56db-z62dt" 
podStartSLOduration=26.513678749 podStartE2EDuration="29.469891345s" podCreationTimestamp="2025-05-17 00:34:55 +0000 UTC" firstStartedPulling="2025-05-17 00:35:21.129664205 +0000 UTC m=+38.347418318" lastFinishedPulling="2025-05-17 00:35:24.085876801 +0000 UTC m=+41.303630914" observedRunningTime="2025-05-17 00:35:24.467996816 +0000 UTC m=+41.685750949" watchObservedRunningTime="2025-05-17 00:35:24.469891345 +0000 UTC m=+41.687645458" May 17 00:35:24.475000 audit[4402]: NETFILTER_CFG table=filter:114 family=2 entries=14 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:24.475000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd49fb3d30 a2=0 a3=7ffd49fb3d1c items=0 ppid=2253 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:24.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:24.481000 audit[4402]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:24.481000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd49fb3d30 a2=0 a3=7ffd49fb3d1c items=0 ppid=2253 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:24.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:24.500000 audit[4404]: NETFILTER_CFG table=filter:116 family=2 entries=14 op=nft_register_rule pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 
00:35:24.500000 audit[4404]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffcd0b0d20 a2=0 a3=7fffcd0b0d0c items=0 ppid=2253 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:24.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:24.507000 audit[4404]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:24.507000 audit[4404]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffcd0b0d20 a2=0 a3=7fffcd0b0d0c items=0 ppid=2253 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:24.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:24.857332 env[1307]: time="2025-05-17T00:35:24.857248950Z" level=info msg="StopPodSandbox for \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\"" May 17 00:35:24.857696 env[1307]: time="2025-05-17T00:35:24.857281892Z" level=info msg="StopPodSandbox for \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\"" May 17 00:35:24.925340 systemd-networkd[1082]: cali84c4cc541eb: Gained IPv6LL May 17 00:35:24.967689 kubelet[2108]: I0517 00:35:24.956997 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd64f56db-gn2th" podStartSLOduration=29.95697795 podStartE2EDuration="29.95697795s" podCreationTimestamp="2025-05-17 00:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:24.501136238 +0000 UTC m=+41.718890351" watchObservedRunningTime="2025-05-17 00:35:24.95697795 +0000 UTC m=+42.174732053" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:24.968 [INFO][4428] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:24.968 [INFO][4428] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" iface="eth0" netns="/var/run/netns/cni-1619ca85-83a5-b83d-5f04-778739b6a368" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:24.971 [INFO][4428] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" iface="eth0" netns="/var/run/netns/cni-1619ca85-83a5-b83d-5f04-778739b6a368" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:24.971 [INFO][4428] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" iface="eth0" netns="/var/run/netns/cni-1619ca85-83a5-b83d-5f04-778739b6a368" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:24.971 [INFO][4428] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:24.971 [INFO][4428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:25.000 [INFO][4443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:25.000 [INFO][4443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:25.001 [INFO][4443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:25.006 [WARNING][4443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:25.006 [INFO][4443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:25.007 [INFO][4443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:25.010410 env[1307]: 2025-05-17 00:35:25.008 [INFO][4428] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:25.012523 env[1307]: time="2025-05-17T00:35:25.010582411Z" level=info msg="TearDown network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\" successfully" May 17 00:35:25.012523 env[1307]: time="2025-05-17T00:35:25.010619089Z" level=info msg="StopPodSandbox for \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\" returns successfully" May 17 00:35:25.012523 env[1307]: time="2025-05-17T00:35:25.011304437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gb94f,Uid:7a10bef1-407b-40ca-9b52-a14544f402bf,Namespace:calico-system,Attempt:1,}" May 17 00:35:25.013108 systemd[1]: run-netns-cni\x2d1619ca85\x2d83a5\x2db83d\x2d5f04\x2d778739b6a368.mount: Deactivated successfully. 
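(Editorial aside.) The systemd line above reports the mount unit `run-netns-cni\x2d1619ca85\x2d83a5\x2db83d\x2d5f04\x2d778739b6a368.mount` being deactivated — the same netns path (`/var/run/netns/cni-1619ca85-...`) that the Calico teardown entries reference. systemd unit names encode "/" as "-" and escape literal "-" as `\x2d`; a minimal sketch of reversing that encoding (the function name `unescape_unit` is illustrative, not a systemd API):

```python
import re

def unescape_unit(name: str) -> str:
    """Recover the filesystem path from a systemd mount-unit name."""
    stem = name.rsplit(".", 1)[0]        # drop the ".mount" suffix
    path = "/" + stem.replace("-", "/")  # "-" encodes "/" in unit names
    # "\xNN" escapes encode the byte NN (here, literal hyphens)
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

unit = r"run-netns-cni\x2d1619ca85\x2d83a5\x2db83d\x2d5f04\x2d778739b6a368.mount"
print(unescape_unit(unit))  # /run/netns/cni-1619ca85-83a5-b83d-5f04-778739b6a368
```

This matches the netns the CNI plugin was cleaning up (`/var/run/netns` is a symlink to `/run/netns` on this distribution).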
May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:24.993 [INFO][4429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:24.993 [INFO][4429] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" iface="eth0" netns="/var/run/netns/cni-bfb19c53-81f9-7edf-6579-64e365e1f54e" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:24.994 [INFO][4429] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" iface="eth0" netns="/var/run/netns/cni-bfb19c53-81f9-7edf-6579-64e365e1f54e" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:24.994 [INFO][4429] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" iface="eth0" netns="/var/run/netns/cni-bfb19c53-81f9-7edf-6579-64e365e1f54e" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:24.994 [INFO][4429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:24.995 [INFO][4429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:25.020 [INFO][4450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:25.020 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:25.020 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:25.026 [WARNING][4450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:25.027 [INFO][4450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:25.028 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:25.033780 env[1307]: 2025-05-17 00:35:25.032 [INFO][4429] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:25.034242 env[1307]: time="2025-05-17T00:35:25.033916167Z" level=info msg="TearDown network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\" successfully" May 17 00:35:25.034242 env[1307]: time="2025-05-17T00:35:25.033952876Z" level=info msg="StopPodSandbox for \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\" returns successfully" May 17 00:35:25.034295 kubelet[2108]: E0517 00:35:25.034279 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:25.034965 env[1307]: time="2025-05-17T00:35:25.034938347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p882x,Uid:aecaa202-f800-4402-b8be-d457733677a8,Namespace:kube-system,Attempt:1,}" May 17 00:35:25.036631 systemd[1]: run-netns-cni\x2dbfb19c53\x2d81f9\x2d7edf\x2d6579\x2d64e365e1f54e.mount: Deactivated successfully. 
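(Editorial aside.) The `audit: PROCTITLE` records interleaved through this log carry the audited process's command line as hex-encoded, NUL-separated argv. A minimal sketch of decoding one, using a proctitle value copied from the records above (the helper name `decode_proctitle` is illustrative):

```python
def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE field: hex bytes, argv separated by NULs."""
    argv = bytes.fromhex(hexstr).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

# proctitle from the NETFILTER_CFG audit events in this log
hexstr = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
          "002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(hexstr))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

Decoding these shows that the `comm="iptables-restor"` events are kube-proxy and Calico driving `iptables-restore` (via `xtables-nft-multi`) with `--noflush --counters`, i.e. incremental rule updates rather than full table rewrites.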
May 17 00:35:25.199349 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali80c1e569d11: link becomes ready May 17 00:35:25.197383 systemd-networkd[1082]: cali80c1e569d11: Link UP May 17 00:35:25.198490 systemd-networkd[1082]: cali80c1e569d11: Gained carrier May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.122 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gb94f-eth0 csi-node-driver- calico-system 7a10bef1-407b-40ca-9b52-a14544f402bf 1092 0 2025-05-17 00:34:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gb94f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali80c1e569d11 [] [] }} ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Namespace="calico-system" Pod="csi-node-driver-gb94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--gb94f-" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.122 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Namespace="calico-system" Pod="csi-node-driver-gb94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.159 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" HandleID="k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.159 [INFO][4490] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" HandleID="k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e3620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gb94f", "timestamp":"2025-05-17 00:35:25.159799697 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.159 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.160 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.160 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.168 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" host="localhost" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.172 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.177 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.179 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.182 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:25.211569 env[1307]: 
2025-05-17 00:35:25.182 [INFO][4490] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" host="localhost" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.183 [INFO][4490] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.187 [INFO][4490] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" host="localhost" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.192 [INFO][4490] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" host="localhost" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.192 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" host="localhost" May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.192 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:35:25.211569 env[1307]: 2025-05-17 00:35:25.192 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" HandleID="k8s-pod-network.7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.212481 env[1307]: 2025-05-17 00:35:25.194 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Namespace="calico-system" Pod="csi-node-driver-gb94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--gb94f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gb94f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a10bef1-407b-40ca-9b52-a14544f402bf", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gb94f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80c1e569d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:25.212481 env[1307]: 2025-05-17 00:35:25.194 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Namespace="calico-system" Pod="csi-node-driver-gb94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.212481 env[1307]: 2025-05-17 00:35:25.194 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80c1e569d11 ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Namespace="calico-system" Pod="csi-node-driver-gb94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.212481 env[1307]: 2025-05-17 00:35:25.198 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Namespace="calico-system" Pod="csi-node-driver-gb94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.212481 env[1307]: 2025-05-17 00:35:25.198 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Namespace="calico-system" Pod="csi-node-driver-gb94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--gb94f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gb94f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a10bef1-407b-40ca-9b52-a14544f402bf", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 58, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f", Pod:"csi-node-driver-gb94f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80c1e569d11", MAC:"7e:b6:00:62:78:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:25.212481 env[1307]: 2025-05-17 00:35:25.209 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f" Namespace="calico-system" Pod="csi-node-driver-gb94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:25.220712 env[1307]: time="2025-05-17T00:35:25.220638983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:25.220824 env[1307]: time="2025-05-17T00:35:25.220718472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:25.220824 env[1307]: time="2025-05-17T00:35:25.220740453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:25.221041 env[1307]: time="2025-05-17T00:35:25.220954284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f pid=4523 runtime=io.containerd.runc.v2 May 17 00:35:25.226000 audit[4535]: NETFILTER_CFG table=filter:118 family=2 entries=40 op=nft_register_chain pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:25.226000 audit[4535]: SYSCALL arch=c000003e syscall=46 success=yes exit=20784 a0=3 a1=7ffe570d64d0 a2=0 a3=7ffe570d64bc items=0 ppid=3463 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:25.226000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:25.251675 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:25.262344 env[1307]: time="2025-05-17T00:35:25.262299688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gb94f,Uid:7a10bef1-407b-40ca-9b52-a14544f402bf,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f\"" May 17 00:35:25.297408 systemd-networkd[1082]: cali0e98e5b5718: Link UP May 17 00:35:25.300307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:35:25.300416 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0e98e5b5718: link becomes ready May 17 00:35:25.300559 systemd-networkd[1082]: cali0e98e5b5718: Gained carrier May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.135 [INFO][4475] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--p882x-eth0 coredns-7c65d6cfc9- kube-system aecaa202-f800-4402-b8be-d457733677a8 1093 0 2025-05-17 00:34:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-p882x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e98e5b5718 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p882x" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p882x-" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.135 [INFO][4475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p882x" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.178 [INFO][4498] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" HandleID="k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.178 [INFO][4498] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" HandleID="k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-p882x", 
"timestamp":"2025-05-17 00:35:25.178417924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.178 [INFO][4498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.192 [INFO][4498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.192 [INFO][4498] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.266 [INFO][4498] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.272 [INFO][4498] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.278 [INFO][4498] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.279 [INFO][4498] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.281 [INFO][4498] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.281 [INFO][4498] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.282 [INFO][4498] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f May 
17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.285 [INFO][4498] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.292 [INFO][4498] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.292 [INFO][4498] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" host="localhost" May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.292 [INFO][4498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:25.312703 env[1307]: 2025-05-17 00:35:25.292 [INFO][4498] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" HandleID="k8s-pod-network.8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.313828 env[1307]: 2025-05-17 00:35:25.294 [INFO][4475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p882x" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p882x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"aecaa202-f800-4402-b8be-d457733677a8", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 
47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-p882x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e98e5b5718", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:25.313828 env[1307]: 2025-05-17 00:35:25.294 [INFO][4475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p882x" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.313828 env[1307]: 2025-05-17 00:35:25.294 [INFO][4475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e98e5b5718 ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p882x" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.313828 env[1307]: 2025-05-17 00:35:25.300 [INFO][4475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p882x" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.313828 env[1307]: 2025-05-17 00:35:25.300 [INFO][4475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p882x" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p882x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"aecaa202-f800-4402-b8be-d457733677a8", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f", Pod:"coredns-7c65d6cfc9-p882x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali0e98e5b5718", MAC:"aa:69:ec:9e:a1:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:25.313828 env[1307]: 2025-05-17 00:35:25.310 [INFO][4475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p882x" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:25.324106 env[1307]: time="2025-05-17T00:35:25.324030919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:35:25.324295 env[1307]: time="2025-05-17T00:35:25.324084510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:35:25.324295 env[1307]: time="2025-05-17T00:35:25.324094839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:35:25.324295 env[1307]: time="2025-05-17T00:35:25.324256774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f pid=4577 runtime=io.containerd.runc.v2 May 17 00:35:25.332000 audit[4593]: NETFILTER_CFG table=filter:119 family=2 entries=44 op=nft_register_chain pid=4593 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:35:25.332000 audit[4593]: SYSCALL arch=c000003e syscall=46 success=yes exit=21500 a0=3 a1=7ffd423eae60 a2=0 a3=7ffd423eae4c items=0 ppid=3463 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:25.332000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:35:25.345669 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:35:25.374588 env[1307]: time="2025-05-17T00:35:25.374541791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p882x,Uid:aecaa202-f800-4402-b8be-d457733677a8,Namespace:kube-system,Attempt:1,} returns sandbox id \"8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f\"" May 17 00:35:25.375534 kubelet[2108]: E0517 00:35:25.375507 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:25.377204 env[1307]: time="2025-05-17T00:35:25.377171802Z" level=info msg="CreateContainer within sandbox \"8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:35:25.402870 env[1307]: time="2025-05-17T00:35:25.402797684Z" level=info msg="CreateContainer within sandbox \"8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10e45ad401f703ef25751c61b782a0b8de03a5629048b4001985ba159666ebdb\"" May 17 00:35:25.403886 env[1307]: time="2025-05-17T00:35:25.403855602Z" level=info msg="StartContainer for \"10e45ad401f703ef25751c61b782a0b8de03a5629048b4001985ba159666ebdb\"" May 17 00:35:25.449606 env[1307]: time="2025-05-17T00:35:25.449510952Z" level=info msg="StartContainer for \"10e45ad401f703ef25751c61b782a0b8de03a5629048b4001985ba159666ebdb\" returns successfully" May 17 00:35:25.464655 kubelet[2108]: I0517 00:35:25.463370 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:25.464655 kubelet[2108]: E0517 00:35:25.463848 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:25.464655 kubelet[2108]: I0517 00:35:25.464116 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:25.492000 audit[4650]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=4650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:25.492000 audit[4650]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc87553db0 a2=0 a3=7ffc87553d9c items=0 ppid=2253 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:25.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:25.498000 audit[4650]: 
NETFILTER_CFG table=nat:121 family=2 entries=44 op=nft_register_rule pid=4650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:25.498000 audit[4650]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc87553db0 a2=0 a3=7ffc87553d9c items=0 ppid=2253 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:25.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:25.693337 systemd-networkd[1082]: cali1070e847677: Gained IPv6LL May 17 00:35:26.464921 kubelet[2108]: E0517 00:35:26.464887 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:26.474606 kubelet[2108]: I0517 00:35:26.474564 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p882x" podStartSLOduration=39.474547526 podStartE2EDuration="39.474547526s" podCreationTimestamp="2025-05-17 00:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:35:25.476021446 +0000 UTC m=+42.693775559" watchObservedRunningTime="2025-05-17 00:35:26.474547526 +0000 UTC m=+43.692301629" May 17 00:35:26.484000 audit[4658]: NETFILTER_CFG table=filter:122 family=2 entries=14 op=nft_register_rule pid=4658 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:26.484000 audit[4658]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1fc2dd10 a2=0 a3=7ffc1fc2dcfc items=0 ppid=2253 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:26.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:26.494000 audit[4658]: NETFILTER_CFG table=nat:123 family=2 entries=56 op=nft_register_chain pid=4658 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:26.494000 audit[4658]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc1fc2dd10 a2=0 a3=7ffc1fc2dcfc items=0 ppid=2253 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:26.494000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:27.101244 systemd-networkd[1082]: cali0e98e5b5718: Gained IPv6LL May 17 00:35:27.175282 env[1307]: time="2025-05-17T00:35:27.175231143Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:27.177010 env[1307]: time="2025-05-17T00:35:27.176987954Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:27.178369 env[1307]: time="2025-05-17T00:35:27.178344922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:27.179532 env[1307]: time="2025-05-17T00:35:27.179509850Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:27.179940 env[1307]: time="2025-05-17T00:35:27.179902768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:35:27.180966 env[1307]: time="2025-05-17T00:35:27.180945166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:35:27.188112 env[1307]: time="2025-05-17T00:35:27.187621727Z" level=info msg="CreateContainer within sandbox \"438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:35:27.201012 env[1307]: time="2025-05-17T00:35:27.200975250Z" level=info msg="CreateContainer within sandbox \"438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6ecc41d797f3f2652720b5a6fc5cb7ed1917e51f760a28b40cfe89f5cee524e5\"" May 17 00:35:27.201375 env[1307]: time="2025-05-17T00:35:27.201355134Z" level=info msg="StartContainer for \"6ecc41d797f3f2652720b5a6fc5cb7ed1917e51f760a28b40cfe89f5cee524e5\"" May 17 00:35:27.229357 systemd-networkd[1082]: cali80c1e569d11: Gained IPv6LL May 17 00:35:27.287518 env[1307]: time="2025-05-17T00:35:27.287441316Z" level=info msg="StartContainer for \"6ecc41d797f3f2652720b5a6fc5cb7ed1917e51f760a28b40cfe89f5cee524e5\" returns successfully" May 17 00:35:27.468526 kubelet[2108]: E0517 00:35:27.468419 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:27.584719 kubelet[2108]: I0517 00:35:27.584654 2108 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/calico-kube-controllers-8db7c4fcb-w875d" podStartSLOduration=25.504742959 podStartE2EDuration="29.584637492s" podCreationTimestamp="2025-05-17 00:34:58 +0000 UTC" firstStartedPulling="2025-05-17 00:35:23.100722788 +0000 UTC m=+40.318476901" lastFinishedPulling="2025-05-17 00:35:27.180617321 +0000 UTC m=+44.398371434" observedRunningTime="2025-05-17 00:35:27.584335665 +0000 UTC m=+44.802089778" watchObservedRunningTime="2025-05-17 00:35:27.584637492 +0000 UTC m=+44.802391605" May 17 00:35:28.471087 kubelet[2108]: E0517 00:35:28.470670 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:29.287465 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:55028.service. May 17 00:35:29.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:55028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:29.289550 kernel: kauditd_printk_skb: 34 callbacks suppressed May 17 00:35:29.289624 kernel: audit: type=1130 audit(1747442129.286:439): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:55028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:29.328000 audit[4729]: USER_ACCT pid=4729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.344475 sshd[4729]: Accepted publickey for core from 10.0.0.1 port 55028 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:29.343000 audit[4729]: CRED_ACQ pid=4729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.345153 kernel: audit: type=1101 audit(1747442129.328:440): pid=4729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.345196 kernel: audit: type=1103 audit(1747442129.343:441): pid=4729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.345587 sshd[4729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:29.354392 systemd-logind[1292]: New session 10 of user core. May 17 00:35:29.360245 kernel: audit: type=1006 audit(1747442129.344:442): pid=4729 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 17 00:35:29.355499 systemd[1]: Started session-10.scope. 
May 17 00:35:29.344000 audit[4729]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7f977020 a2=3 a3=0 items=0 ppid=1 pid=4729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:29.367914 kernel: audit: type=1300 audit(1747442129.344:442): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7f977020 a2=3 a3=0 items=0 ppid=1 pid=4729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:29.367968 kernel: audit: type=1327 audit(1747442129.344:442): proctitle=737368643A20636F7265205B707269765D May 17 00:35:29.344000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:29.363000 audit[4729]: USER_START pid=4729 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.374310 kernel: audit: type=1105 audit(1747442129.363:443): pid=4729 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.374366 kernel: audit: type=1103 audit(1747442129.364:444): pid=4732 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.364000 audit[4732]: CRED_ACQ pid=4732 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.546393 sshd[4729]: pam_unix(sshd:session): session closed for user core May 17 00:35:29.557232 kernel: audit: type=1106 audit(1747442129.546:445): pid=4729 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.557380 kernel: audit: type=1104 audit(1747442129.546:446): pid=4729 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.546000 audit[4729]: USER_END pid=4729 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.546000 audit[4729]: CRED_DISP pid=4729 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:29.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:55028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:29.548809 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:55028.service: Deactivated successfully. May 17 00:35:29.549751 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:35:29.550813 systemd-logind[1292]: Session 10 logged out. Waiting for processes to exit. 
May 17 00:35:29.551800 systemd-logind[1292]: Removed session 10. May 17 00:35:29.661859 env[1307]: time="2025-05-17T00:35:29.661767418Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:29.666844 env[1307]: time="2025-05-17T00:35:29.666802344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:29.668852 env[1307]: time="2025-05-17T00:35:29.668821397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:29.670659 env[1307]: time="2025-05-17T00:35:29.670630795Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:29.671081 env[1307]: time="2025-05-17T00:35:29.670956686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:35:29.673291 env[1307]: time="2025-05-17T00:35:29.673258480Z" level=info msg="CreateContainer within sandbox \"7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:35:29.688990 env[1307]: time="2025-05-17T00:35:29.688937627Z" level=info msg="CreateContainer within sandbox \"7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0c6ec5db23f324178d495812d69a0034f37512d47003b30a1c060e7c585e5e72\"" May 17 00:35:29.689447 env[1307]: 
time="2025-05-17T00:35:29.689422057Z" level=info msg="StartContainer for \"0c6ec5db23f324178d495812d69a0034f37512d47003b30a1c060e7c585e5e72\"" May 17 00:35:29.846671 env[1307]: time="2025-05-17T00:35:29.846510702Z" level=info msg="StartContainer for \"0c6ec5db23f324178d495812d69a0034f37512d47003b30a1c060e7c585e5e72\" returns successfully" May 17 00:35:29.847786 env[1307]: time="2025-05-17T00:35:29.847752514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:35:32.139356 env[1307]: time="2025-05-17T00:35:32.139264641Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:32.155566 env[1307]: time="2025-05-17T00:35:32.155469852Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:32.157885 env[1307]: time="2025-05-17T00:35:32.157837968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:32.160168 env[1307]: time="2025-05-17T00:35:32.160113963Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:35:32.160437 env[1307]: time="2025-05-17T00:35:32.160386525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:35:32.162976 env[1307]: time="2025-05-17T00:35:32.162929570Z" level=info msg="CreateContainer within sandbox 
\"7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:35:32.183384 env[1307]: time="2025-05-17T00:35:32.183325140Z" level=info msg="CreateContainer within sandbox \"7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2ecf7b15c4af85098966ee1614ce3d68de260b95398b998f2d1cfa564ec7ad82\"" May 17 00:35:32.184045 env[1307]: time="2025-05-17T00:35:32.183980961Z" level=info msg="StartContainer for \"2ecf7b15c4af85098966ee1614ce3d68de260b95398b998f2d1cfa564ec7ad82\"" May 17 00:35:32.293100 env[1307]: time="2025-05-17T00:35:32.292998488Z" level=info msg="StartContainer for \"2ecf7b15c4af85098966ee1614ce3d68de260b95398b998f2d1cfa564ec7ad82\" returns successfully" May 17 00:35:32.985685 kubelet[2108]: I0517 00:35:32.985633 2108 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:35:32.985685 kubelet[2108]: I0517 00:35:32.985681 2108 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:35:34.549834 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:55044.service. May 17 00:35:34.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.116:22-10.0.0.1:55044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:34.551103 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:35:34.551173 kernel: audit: type=1130 audit(1747442134.548:448): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.116:22-10.0.0.1:55044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:35:34.588000 audit[4825]: USER_ACCT pid=4825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.590007 sshd[4825]: Accepted publickey for core from 10.0.0.1 port 55044 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:34.592000 audit[4825]: CRED_ACQ pid=4825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.593687 sshd[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:34.597438 kernel: audit: type=1101 audit(1747442134.588:449): pid=4825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.597515 kernel: audit: type=1103 audit(1747442134.592:450): pid=4825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.597543 kernel: audit: type=1006 audit(1747442134.592:451): pid=4825 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 17 00:35:34.597032 systemd-logind[1292]: New session 11 of user core. May 17 00:35:34.597986 systemd[1]: Started session-11.scope. 
May 17 00:35:34.592000 audit[4825]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc61633990 a2=3 a3=0 items=0 ppid=1 pid=4825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:34.603465 kernel: audit: type=1300 audit(1747442134.592:451): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc61633990 a2=3 a3=0 items=0 ppid=1 pid=4825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:34.603528 kernel: audit: type=1327 audit(1747442134.592:451): proctitle=737368643A20636F7265205B707269765D May 17 00:35:34.592000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:34.601000 audit[4825]: USER_START pid=4825 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.608973 kernel: audit: type=1105 audit(1747442134.601:452): pid=4825 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.609105 kernel: audit: type=1103 audit(1747442134.603:453): pid=4828 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.603000 audit[4828]: CRED_ACQ pid=4828 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.750908 sshd[4825]: pam_unix(sshd:session): session closed for user core May 17 00:35:34.751000 audit[4825]: USER_END pid=4825 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.753343 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:55058.service. May 17 00:35:34.754434 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:55044.service: Deactivated successfully. May 17 00:35:34.755272 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:35:34.755974 systemd-logind[1292]: Session 11 logged out. Waiting for processes to exit. May 17 00:35:34.760427 kernel: audit: type=1106 audit(1747442134.751:454): pid=4825 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.760495 kernel: audit: type=1104 audit(1747442134.751:455): pid=4825 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.751000 audit[4825]: CRED_DISP pid=4825 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.756974 systemd-logind[1292]: Removed session 11. 
May 17 00:35:34.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.116:22-10.0.0.1:55058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:34.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.116:22-10.0.0.1:55044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:34.788000 audit[4838]: USER_ACCT pid=4838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.789949 sshd[4838]: Accepted publickey for core from 10.0.0.1 port 55058 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:34.789000 audit[4838]: CRED_ACQ pid=4838 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.789000 audit[4838]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff989e3690 a2=3 a3=0 items=0 ppid=1 pid=4838 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:34.789000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:34.791170 sshd[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:34.794280 systemd-logind[1292]: New session 12 of user core. May 17 00:35:34.794936 systemd[1]: Started session-12.scope. 
May 17 00:35:34.797000 audit[4838]: USER_START pid=4838 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.798000 audit[4843]: CRED_ACQ pid=4843 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.933617 sshd[4838]: pam_unix(sshd:session): session closed for user core May 17 00:35:34.933000 audit[4838]: USER_END pid=4838 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.933000 audit[4838]: CRED_DISP pid=4838 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.116:22-10.0.0.1:55068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:34.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.116:22-10.0.0.1:55058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:34.936238 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:55068.service. May 17 00:35:34.937203 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:55058.service: Deactivated successfully. 
May 17 00:35:34.938028 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:35:34.939955 systemd-logind[1292]: Session 12 logged out. Waiting for processes to exit. May 17 00:35:34.941537 systemd-logind[1292]: Removed session 12. May 17 00:35:34.976000 audit[4850]: USER_ACCT pid=4850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.977657 sshd[4850]: Accepted publickey for core from 10.0.0.1 port 55068 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:34.977000 audit[4850]: CRED_ACQ pid=4850 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.977000 audit[4850]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1cbabb10 a2=3 a3=0 items=0 ppid=1 pid=4850 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:34.977000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:34.978756 sshd[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:34.982460 systemd-logind[1292]: New session 13 of user core. May 17 00:35:34.983212 systemd[1]: Started session-13.scope. 
May 17 00:35:34.985000 audit[4850]: USER_START pid=4850 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:34.986000 audit[4855]: CRED_ACQ pid=4855 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:35.084325 sshd[4850]: pam_unix(sshd:session): session closed for user core May 17 00:35:35.084000 audit[4850]: USER_END pid=4850 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:35.084000 audit[4850]: CRED_DISP pid=4850 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:35.086309 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:55068.service: Deactivated successfully. May 17 00:35:35.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.116:22-10.0.0.1:55068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:35.087344 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:35:35.087355 systemd-logind[1292]: Session 13 logged out. Waiting for processes to exit. May 17 00:35:35.088003 systemd-logind[1292]: Removed session 13. 
May 17 00:35:36.856415 env[1307]: time="2025-05-17T00:35:36.856316757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:35:36.864836 kubelet[2108]: I0517 00:35:36.864761 2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gb94f" podStartSLOduration=31.967151973 podStartE2EDuration="38.864733139s" podCreationTimestamp="2025-05-17 00:34:58 +0000 UTC" firstStartedPulling="2025-05-17 00:35:25.263889625 +0000 UTC m=+42.481643738" lastFinishedPulling="2025-05-17 00:35:32.161470791 +0000 UTC m=+49.379224904" observedRunningTime="2025-05-17 00:35:32.497375917 +0000 UTC m=+49.715130030" watchObservedRunningTime="2025-05-17 00:35:36.864733139 +0000 UTC m=+54.082487252" May 17 00:35:37.086008 env[1307]: time="2025-05-17T00:35:37.085917691Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:35:37.087064 env[1307]: time="2025-05-17T00:35:37.087014420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:35:37.087316 kubelet[2108]: E0517 00:35:37.087263 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:37.087411 kubelet[2108]: E0517 00:35:37.087324 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed 
to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:35:37.087569 kubelet[2108]: E0517 00:35:37.087530 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3da8b4fd3b234db5a0b65fe67fcf7d29,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrbb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d44b8df7-v6qvx_calico-system(664372a8-af86-4567-a233-d8be21950e7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:35:37.087983 env[1307]: time="2025-05-17T00:35:37.087929418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:35:37.345401 env[1307]: time="2025-05-17T00:35:37.345309096Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:35:37.377527 env[1307]: time="2025-05-17T00:35:37.377475998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:35:37.377775 kubelet[2108]: E0517 00:35:37.377715 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:37.377845 kubelet[2108]: E0517 00:35:37.377782 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:35:37.378096 kubelet[2108]: E0517 00:35:37.378028 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckmv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-zf9xd_calico-system(915e2165-3634-409e-af91-ef9388cac59f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:35:37.378383 env[1307]: time="2025-05-17T00:35:37.378221798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:35:37.379303 kubelet[2108]: E0517 00:35:37.379277 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-zf9xd" podUID="915e2165-3634-409e-af91-ef9388cac59f" May 17 00:35:37.673662 env[1307]: time="2025-05-17T00:35:37.673520076Z" level=info msg="trying next host" error="failed to 
authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:35:37.702215 env[1307]: time="2025-05-17T00:35:37.702143325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:35:37.702423 kubelet[2108]: E0517 00:35:37.702373 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:37.702623 kubelet[2108]: E0517 00:35:37.702438 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:35:37.702623 kubelet[2108]: E0517 00:35:37.702567 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrbb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d44b8df7-v6qvx_calico-system(664372a8-af86-4567-a233-d8be21950e7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:35:37.703776 kubelet[2108]: E0517 00:35:37.703739 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-55d44b8df7-v6qvx" podUID="664372a8-af86-4567-a233-d8be21950e7b" May 17 00:35:40.087398 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:43422.service. May 17 00:35:40.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:43422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:40.088463 kernel: kauditd_printk_skb: 23 callbacks suppressed May 17 00:35:40.088527 kernel: audit: type=1130 audit(1747442140.086:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:43422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:40.122000 audit[4872]: USER_ACCT pid=4872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.123757 sshd[4872]: Accepted publickey for core from 10.0.0.1 port 43422 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:40.125616 sshd[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:40.124000 audit[4872]: CRED_ACQ pid=4872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.129050 systemd-logind[1292]: New session 14 of user core. May 17 00:35:40.129740 systemd[1]: Started session-14.scope. May 17 00:35:40.131053 kernel: audit: type=1101 audit(1747442140.122:476): pid=4872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.131109 kernel: audit: type=1103 audit(1747442140.124:477): pid=4872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.131134 kernel: audit: type=1006 audit(1747442140.124:478): pid=4872 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 17 00:35:40.124000 audit[4872]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcde654b70 a2=3 a3=0 items=0 ppid=1 pid=4872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:40.137250 kernel: audit: type=1300 audit(1747442140.124:478): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcde654b70 a2=3 a3=0 items=0 ppid=1 pid=4872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:40.137320 kernel: audit: type=1327 audit(1747442140.124:478): proctitle=737368643A20636F7265205B707269765D May 17 00:35:40.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:40.133000 audit[4872]: USER_START pid=4872 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.142725 kernel: audit: type=1105 audit(1747442140.133:479): pid=4872 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.142768 kernel: audit: type=1103 audit(1747442140.134:480): pid=4875 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.134000 audit[4875]: CRED_ACQ pid=4875 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.231610 sshd[4872]: pam_unix(sshd:session): session closed for user core May 17 00:35:40.231000 
audit[4872]: USER_END pid=4872 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.234079 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:43422.service: Deactivated successfully. May 17 00:35:40.234853 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:35:40.231000 audit[4872]: CRED_DISP pid=4872 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.238754 systemd-logind[1292]: Session 14 logged out. Waiting for processes to exit. May 17 00:35:40.239437 systemd-logind[1292]: Removed session 14. May 17 00:35:40.240294 kernel: audit: type=1106 audit(1747442140.231:481): pid=4872 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.240399 kernel: audit: type=1104 audit(1747442140.231:482): pid=4872 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:40.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:43422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:40.469165 kubelet[2108]: I0517 00:35:40.469035 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:40.502000 audit[4887]: NETFILTER_CFG table=filter:124 family=2 entries=13 op=nft_register_rule pid=4887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:40.502000 audit[4887]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe428e2c80 a2=0 a3=7ffe428e2c6c items=0 ppid=2253 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:40.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:40.508000 audit[4887]: NETFILTER_CFG table=nat:125 family=2 entries=27 op=nft_register_chain pid=4887 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:40.508000 audit[4887]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe428e2c80 a2=0 a3=7ffe428e2c6c items=0 ppid=2253 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:40.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:42.879627 env[1307]: time="2025-05-17T00:35:42.879563556Z" level=info msg="StopPodSandbox for \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\"" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.905 [WARNING][4907] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" 
WorkloadEndpoint="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.905 [INFO][4907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.905 [INFO][4907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" iface="eth0" netns="" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.906 [INFO][4907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.906 [INFO][4907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.922 [INFO][4915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.922 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.922 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.927 [WARNING][4915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.927 [INFO][4915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.929 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:42.932756 env[1307]: 2025-05-17 00:35:42.931 [INFO][4907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:42.937782 env[1307]: time="2025-05-17T00:35:42.932779201Z" level=info msg="TearDown network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\" successfully" May 17 00:35:42.937782 env[1307]: time="2025-05-17T00:35:42.932813575Z" level=info msg="StopPodSandbox for \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\" returns successfully" May 17 00:35:42.937782 env[1307]: time="2025-05-17T00:35:42.933402320Z" level=info msg="RemovePodSandbox for \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\"" May 17 00:35:42.937782 env[1307]: time="2025-05-17T00:35:42.933426976Z" level=info msg="Forcibly stopping sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\"" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.962 [WARNING][4933] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" 
WorkloadEndpoint="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.963 [INFO][4933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.963 [INFO][4933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" iface="eth0" netns="" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.963 [INFO][4933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.963 [INFO][4933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.981 [INFO][4942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.981 [INFO][4942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.981 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.986 [WARNING][4942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.986 [INFO][4942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" HandleID="k8s-pod-network.00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" Workload="localhost-k8s-whisker--5cfb7c6489--cwwkf-eth0" May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.987 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:42.990418 env[1307]: 2025-05-17 00:35:42.989 [INFO][4933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7" May 17 00:35:42.990881 env[1307]: time="2025-05-17T00:35:42.990448025Z" level=info msg="TearDown network for sandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\" successfully" May 17 00:35:43.071185 env[1307]: time="2025-05-17T00:35:43.071126789Z" level=info msg="RemovePodSandbox \"00d5214710b66f8ba1e154fe3f64c4596c82545b43b9bf48efa4d56c4ff37fb7\" returns successfully" May 17 00:35:43.071728 env[1307]: time="2025-05-17T00:35:43.071697290Z" level=info msg="StopPodSandbox for \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\"" May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.101 [WARNING][4959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0", GenerateName:"calico-apiserver-dd64f56db-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffd190ac-644b-4bbf-bd2f-feed5f4c93a6", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd64f56db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789", Pod:"calico-apiserver-dd64f56db-gn2th", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1070e847677", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.101 [INFO][4959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.101 [INFO][4959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" iface="eth0" netns="" May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.101 [INFO][4959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.101 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.119 [INFO][4968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.119 [INFO][4968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.119 [INFO][4968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.125 [WARNING][4968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.125 [INFO][4968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.128 [INFO][4968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.131374 env[1307]: 2025-05-17 00:35:43.129 [INFO][4959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:43.131374 env[1307]: time="2025-05-17T00:35:43.131339275Z" level=info msg="TearDown network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\" successfully" May 17 00:35:43.132212 env[1307]: time="2025-05-17T00:35:43.131383268Z" level=info msg="StopPodSandbox for \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\" returns successfully" May 17 00:35:43.132212 env[1307]: time="2025-05-17T00:35:43.132065419Z" level=info msg="RemovePodSandbox for \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\"" May 17 00:35:43.132212 env[1307]: time="2025-05-17T00:35:43.132117126Z" level=info msg="Forcibly stopping sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\"" May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.160 [WARNING][4987] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0", GenerateName:"calico-apiserver-dd64f56db-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffd190ac-644b-4bbf-bd2f-feed5f4c93a6", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd64f56db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df78e18b63068c3876f9e3c0c30bf2b3be69e601b65a3ee617f67cbbf0005789", Pod:"calico-apiserver-dd64f56db-gn2th", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1070e847677", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.161 [INFO][4987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.161 [INFO][4987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" iface="eth0" netns="" May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.161 [INFO][4987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.161 [INFO][4987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.178 [INFO][4995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.178 [INFO][4995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.178 [INFO][4995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.184 [WARNING][4995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.184 [INFO][4995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" HandleID="k8s-pod-network.16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" Workload="localhost-k8s-calico--apiserver--dd64f56db--gn2th-eth0" May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.186 [INFO][4995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.189500 env[1307]: 2025-05-17 00:35:43.187 [INFO][4987] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7" May 17 00:35:43.189959 env[1307]: time="2025-05-17T00:35:43.189540536Z" level=info msg="TearDown network for sandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\" successfully" May 17 00:35:43.192920 env[1307]: time="2025-05-17T00:35:43.192882279Z" level=info msg="RemovePodSandbox \"16858c775bab4c6b3ab697c419a0e19683b27180aafcb4f63b09768cea83a5b7\" returns successfully" May 17 00:35:43.193509 env[1307]: time="2025-05-17T00:35:43.193463279Z" level=info msg="StopPodSandbox for \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\"" May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.221 [WARNING][5013] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p882x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"aecaa202-f800-4402-b8be-d457733677a8", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f", Pod:"coredns-7c65d6cfc9-p882x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e98e5b5718", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.221 [INFO][5013] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.221 [INFO][5013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" iface="eth0" netns="" May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.221 [INFO][5013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.221 [INFO][5013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.239 [INFO][5022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.239 [INFO][5022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.240 [INFO][5022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.245 [WARNING][5022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.245 [INFO][5022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.246 [INFO][5022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.249140 env[1307]: 2025-05-17 00:35:43.247 [INFO][5013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:43.249609 env[1307]: time="2025-05-17T00:35:43.249167092Z" level=info msg="TearDown network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\" successfully" May 17 00:35:43.249609 env[1307]: time="2025-05-17T00:35:43.249204843Z" level=info msg="StopPodSandbox for \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\" returns successfully" May 17 00:35:43.249838 env[1307]: time="2025-05-17T00:35:43.249805571Z" level=info msg="RemovePodSandbox for \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\"" May 17 00:35:43.249887 env[1307]: time="2025-05-17T00:35:43.249847659Z" level=info msg="Forcibly stopping sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\"" May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.278 [WARNING][5040] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--p882x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"aecaa202-f800-4402-b8be-d457733677a8", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8832f3886987316ab962e205fca03432fd9fd297e51c2b3436fcbee6fd352b1f", Pod:"coredns-7c65d6cfc9-p882x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e98e5b5718", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.278 [INFO][5040] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.278 [INFO][5040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" iface="eth0" netns="" May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.278 [INFO][5040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.278 [INFO][5040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.295 [INFO][5049] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.295 [INFO][5049] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.295 [INFO][5049] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.300 [WARNING][5049] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.300 [INFO][5049] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" HandleID="k8s-pod-network.d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" Workload="localhost-k8s-coredns--7c65d6cfc9--p882x-eth0" May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.301 [INFO][5049] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.304686 env[1307]: 2025-05-17 00:35:43.303 [INFO][5040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93" May 17 00:35:43.305175 env[1307]: time="2025-05-17T00:35:43.304702449Z" level=info msg="TearDown network for sandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\" successfully" May 17 00:35:43.308160 env[1307]: time="2025-05-17T00:35:43.308118692Z" level=info msg="RemovePodSandbox \"d2ffaeb0cd8e369eebb37020a461765efe7bee01428374d0bd3e2d709412db93\" returns successfully" May 17 00:35:43.308672 env[1307]: time="2025-05-17T00:35:43.308643807Z" level=info msg="StopPodSandbox for \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\"" May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.339 [WARNING][5067] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0", GenerateName:"calico-kube-controllers-8db7c4fcb-", Namespace:"calico-system", SelfLink:"", UID:"da68fa8b-b750-48ef-8ed0-edd244e098a4", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8db7c4fcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f", Pod:"calico-kube-controllers-8db7c4fcb-w875d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84c4cc541eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.339 [INFO][5067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.339 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" iface="eth0" netns="" May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.339 [INFO][5067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.339 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.355 [INFO][5076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.355 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.355 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.361 [WARNING][5076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.361 [INFO][5076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.363 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.365990 env[1307]: 2025-05-17 00:35:43.364 [INFO][5067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:43.366552 env[1307]: time="2025-05-17T00:35:43.366507335Z" level=info msg="TearDown network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\" successfully" May 17 00:35:43.366552 env[1307]: time="2025-05-17T00:35:43.366548161Z" level=info msg="StopPodSandbox for \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\" returns successfully" May 17 00:35:43.367083 env[1307]: time="2025-05-17T00:35:43.367040355Z" level=info msg="RemovePodSandbox for \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\"" May 17 00:35:43.367136 env[1307]: time="2025-05-17T00:35:43.367082214Z" level=info msg="Forcibly stopping sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\"" May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.394 [WARNING][5094] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0", GenerateName:"calico-kube-controllers-8db7c4fcb-", Namespace:"calico-system", SelfLink:"", UID:"da68fa8b-b750-48ef-8ed0-edd244e098a4", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8db7c4fcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"438bbd073954db0bc0086d1722df931db15f69954a71896686500c7ae7d6ad1f", Pod:"calico-kube-controllers-8db7c4fcb-w875d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84c4cc541eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.395 [INFO][5094] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.395 [INFO][5094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" iface="eth0" netns="" May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.395 [INFO][5094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.395 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.413 [INFO][5103] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.413 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.413 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.419 [WARNING][5103] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.419 [INFO][5103] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" HandleID="k8s-pod-network.d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" Workload="localhost-k8s-calico--kube--controllers--8db7c4fcb--w875d-eth0" May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.420 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.423854 env[1307]: 2025-05-17 00:35:43.422 [INFO][5094] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d" May 17 00:35:43.423854 env[1307]: time="2025-05-17T00:35:43.423824315Z" level=info msg="TearDown network for sandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\" successfully" May 17 00:35:43.427451 env[1307]: time="2025-05-17T00:35:43.427421838Z" level=info msg="RemovePodSandbox \"d1156c54fa45e94d26c315174139796834572940b4002010dc0e6bec89757d4d\" returns successfully" May 17 00:35:43.427988 env[1307]: time="2025-05-17T00:35:43.427958425Z" level=info msg="StopPodSandbox for \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\"" May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.456 [WARNING][5121] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"915e2165-3634-409e-af91-ef9388cac59f", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80", Pod:"goldmane-8f77d7b6c-zf9xd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidac0a455c90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.456 [INFO][5121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.456 [INFO][5121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" iface="eth0" netns="" May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.456 [INFO][5121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.457 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.473 [INFO][5130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.473 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.473 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.478 [WARNING][5130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.478 [INFO][5130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.481 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:35:43.484885 env[1307]: 2025-05-17 00:35:43.483 [INFO][5121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:43.485609 env[1307]: time="2025-05-17T00:35:43.485097322Z" level=info msg="TearDown network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\" successfully" May 17 00:35:43.485609 env[1307]: time="2025-05-17T00:35:43.485145773Z" level=info msg="StopPodSandbox for \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\" returns successfully" May 17 00:35:43.485703 env[1307]: time="2025-05-17T00:35:43.485673554Z" level=info msg="RemovePodSandbox for \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\"" May 17 00:35:43.485755 env[1307]: time="2025-05-17T00:35:43.485715282Z" level=info msg="Forcibly stopping sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\"" May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.519 [WARNING][5148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"915e2165-3634-409e-af91-ef9388cac59f", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c77f26ef8a678d9e2100c2fd4cdaed03d958c7ea90ed73682fd761f19a08cb80", Pod:"goldmane-8f77d7b6c-zf9xd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidac0a455c90", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.520 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.520 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" iface="eth0" netns="" May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.520 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.520 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.547 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.547 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.547 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.554 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.554 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" HandleID="k8s-pod-network.c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" Workload="localhost-k8s-goldmane--8f77d7b6c--zf9xd-eth0" May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.556 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:35:43.561628 env[1307]: 2025-05-17 00:35:43.559 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449" May 17 00:35:43.562120 env[1307]: time="2025-05-17T00:35:43.561662755Z" level=info msg="TearDown network for sandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\" successfully" May 17 00:35:43.565843 env[1307]: time="2025-05-17T00:35:43.565788790Z" level=info msg="RemovePodSandbox \"c92048b60d243b6f2e585d382e0a3be167213d7d321d4ead1dee93c950531449\" returns successfully" May 17 00:35:43.566310 env[1307]: time="2025-05-17T00:35:43.566274982Z" level=info msg="StopPodSandbox for \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\"" May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.645 [WARNING][5173] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0", GenerateName:"calico-apiserver-dd64f56db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b35c5827-c746-454d-b6bf-a8a0e8b71713", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd64f56db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b", Pod:"calico-apiserver-dd64f56db-z62dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a6234a4a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.645 [INFO][5173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.646 [INFO][5173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" iface="eth0" netns="" May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.646 [INFO][5173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.646 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.673 [INFO][5182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.673 [INFO][5182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.673 [INFO][5182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.683 [WARNING][5182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.683 [INFO][5182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.684 [INFO][5182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.687754 env[1307]: 2025-05-17 00:35:43.685 [INFO][5173] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:43.687754 env[1307]: time="2025-05-17T00:35:43.687718096Z" level=info msg="TearDown network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\" successfully" May 17 00:35:43.687754 env[1307]: time="2025-05-17T00:35:43.687751078Z" level=info msg="StopPodSandbox for \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\" returns successfully" May 17 00:35:43.688683 env[1307]: time="2025-05-17T00:35:43.688632422Z" level=info msg="RemovePodSandbox for \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\"" May 17 00:35:43.688853 env[1307]: time="2025-05-17T00:35:43.688686003Z" level=info msg="Forcibly stopping sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\"" May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.730 [WARNING][5199] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0", GenerateName:"calico-apiserver-dd64f56db-", Namespace:"calico-apiserver", SelfLink:"", UID:"b35c5827-c746-454d-b6bf-a8a0e8b71713", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd64f56db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff2ea67a69cdd1ad166a498dfd0061e0aca9690214ac5168f38e441da970f18b", Pod:"calico-apiserver-dd64f56db-z62dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a6234a4a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.731 [INFO][5199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.731 [INFO][5199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" iface="eth0" netns="" May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.731 [INFO][5199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.731 [INFO][5199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.757 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.758 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.758 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.765 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.765 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" HandleID="k8s-pod-network.4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" Workload="localhost-k8s-calico--apiserver--dd64f56db--z62dt-eth0" May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.766 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.769985 env[1307]: 2025-05-17 00:35:43.768 [INFO][5199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da" May 17 00:35:43.770640 env[1307]: time="2025-05-17T00:35:43.770603001Z" level=info msg="TearDown network for sandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\" successfully" May 17 00:35:43.773954 env[1307]: time="2025-05-17T00:35:43.773924837Z" level=info msg="RemovePodSandbox \"4e69d7d5918813a8d171cf9a9a301058628cb69ab346a3a93b7370b5c084e3da\" returns successfully" May 17 00:35:43.774490 env[1307]: time="2025-05-17T00:35:43.774465672Z" level=info msg="StopPodSandbox for \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\"" May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.806 [WARNING][5225] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"156114d2-bfb2-42a0-a77e-b4eed0e196ef", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5", Pod:"coredns-7c65d6cfc9-h6snv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8337b6fea0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.807 [INFO][5225] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.807 [INFO][5225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" iface="eth0" netns="" May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.807 [INFO][5225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.807 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.825 [INFO][5234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.825 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.825 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.829 [WARNING][5234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.830 [INFO][5234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.831 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.834169 env[1307]: 2025-05-17 00:35:43.832 [INFO][5225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:43.834649 env[1307]: time="2025-05-17T00:35:43.834190143Z" level=info msg="TearDown network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\" successfully" May 17 00:35:43.834649 env[1307]: time="2025-05-17T00:35:43.834223395Z" level=info msg="StopPodSandbox for \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\" returns successfully" May 17 00:35:43.834828 env[1307]: time="2025-05-17T00:35:43.834769523Z" level=info msg="RemovePodSandbox for \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\"" May 17 00:35:43.834890 env[1307]: time="2025-05-17T00:35:43.834841219Z" level=info msg="Forcibly stopping sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\"" May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.865 [WARNING][5253] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"156114d2-bfb2-42a0-a77e-b4eed0e196ef", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0794a681479f79818b6819b38dbe79e0a18d76f030765beb8c320d729299b2f5", Pod:"coredns-7c65d6cfc9-h6snv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8337b6fea0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.865 [INFO][5253] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.865 [INFO][5253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" iface="eth0" netns="" May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.865 [INFO][5253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.865 [INFO][5253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.889 [INFO][5261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.890 [INFO][5261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.890 [INFO][5261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.895 [WARNING][5261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.896 [INFO][5261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" HandleID="k8s-pod-network.ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" Workload="localhost-k8s-coredns--7c65d6cfc9--h6snv-eth0" May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.897 [INFO][5261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:35:43.900214 env[1307]: 2025-05-17 00:35:43.898 [INFO][5253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c" May 17 00:35:43.901100 env[1307]: time="2025-05-17T00:35:43.900228265Z" level=info msg="TearDown network for sandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\" successfully" May 17 00:35:43.905187 env[1307]: time="2025-05-17T00:35:43.905136386Z" level=info msg="RemovePodSandbox \"ab2ab67e613a7b2cdd86eb6ba7e8961876335054649b261eaff6894d3474c70c\" returns successfully" May 17 00:35:43.905703 env[1307]: time="2025-05-17T00:35:43.905681031Z" level=info msg="StopPodSandbox for \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\"" May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.935 [WARNING][5280] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gb94f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a10bef1-407b-40ca-9b52-a14544f402bf", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f", Pod:"csi-node-driver-gb94f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80c1e569d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.935 [INFO][5280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.935 [INFO][5280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" iface="eth0" netns="" May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.935 [INFO][5280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.935 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.952 [INFO][5288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.953 [INFO][5288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.953 [INFO][5288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.957 [WARNING][5288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.957 [INFO][5288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.958 [INFO][5288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:35:43.963036 env[1307]: 2025-05-17 00:35:43.960 [INFO][5280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:43.963036 env[1307]: time="2025-05-17T00:35:43.961977613Z" level=info msg="TearDown network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\" successfully" May 17 00:35:43.963036 env[1307]: time="2025-05-17T00:35:43.962011867Z" level=info msg="StopPodSandbox for \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\" returns successfully" May 17 00:35:43.963036 env[1307]: time="2025-05-17T00:35:43.962885321Z" level=info msg="RemovePodSandbox for \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\"" May 17 00:35:43.963036 env[1307]: time="2025-05-17T00:35:43.962949592Z" level=info msg="Forcibly stopping sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\"" May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:43.993 [WARNING][5305] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gb94f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a10bef1-407b-40ca-9b52-a14544f402bf", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e19dfe19d4b6e7a9458ec064ec78e46b90f68bed3668b25499494c3fe8db81f", Pod:"csi-node-driver-gb94f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80c1e569d11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:43.994 [INFO][5305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:43.994 [INFO][5305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" iface="eth0" netns="" May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:43.994 [INFO][5305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:43.994 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:44.012 [INFO][5314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:44.012 [INFO][5314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:44.012 [INFO][5314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:44.017 [WARNING][5314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:44.017 [INFO][5314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" HandleID="k8s-pod-network.b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" Workload="localhost-k8s-csi--node--driver--gb94f-eth0" May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:44.018 [INFO][5314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:35:44.022127 env[1307]: 2025-05-17 00:35:44.020 [INFO][5305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1" May 17 00:35:44.022731 env[1307]: time="2025-05-17T00:35:44.022154530Z" level=info msg="TearDown network for sandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\" successfully" May 17 00:35:44.025474 env[1307]: time="2025-05-17T00:35:44.025449529Z" level=info msg="RemovePodSandbox \"b19bb651e6b45f154d8f308cc57e23573a0f067f308b1b87c106f3b23e570ab1\" returns successfully" May 17 00:35:45.235206 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:43426.service. May 17 00:35:45.240243 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:35:45.240276 kernel: audit: type=1130 audit(1747442145.234:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:43426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:45.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:43426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:45.273000 audit[5321]: USER_ACCT pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.274714 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 43426 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:45.276593 sshd[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:45.275000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.284555 kernel: audit: type=1101 audit(1747442145.273:487): pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.284643 kernel: audit: type=1103 audit(1747442145.275:488): pid=5321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.284664 kernel: audit: type=1006 audit(1747442145.275:489): pid=5321 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 17 00:35:45.282133 systemd-logind[1292]: New session 15 of user core. May 17 00:35:45.282730 systemd[1]: Started session-15.scope. 
May 17 00:35:45.275000 audit[5321]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe125cb1f0 a2=3 a3=0 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:45.288918 kernel: audit: type=1300 audit(1747442145.275:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe125cb1f0 a2=3 a3=0 items=0 ppid=1 pid=5321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:45.288969 kernel: audit: type=1327 audit(1747442145.275:489): proctitle=737368643A20636F7265205B707269765D May 17 00:35:45.275000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:45.290269 kernel: audit: type=1105 audit(1747442145.286:490): pid=5321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.286000 audit[5321]: USER_START pid=5321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.294458 kernel: audit: type=1103 audit(1747442145.287:491): pid=5324 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.287000 audit[5324]: CRED_ACQ pid=5324 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.434590 sshd[5321]: pam_unix(sshd:session): session closed for user core May 17 00:35:45.434000 audit[5321]: USER_END pid=5321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.436886 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:43426.service: Deactivated successfully. May 17 00:35:45.438039 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:35:45.438430 systemd-logind[1292]: Session 15 logged out. Waiting for processes to exit. May 17 00:35:45.439272 systemd-logind[1292]: Removed session 15. May 17 00:35:45.434000 audit[5321]: CRED_DISP pid=5321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.444332 kernel: audit: type=1106 audit(1747442145.434:492): pid=5321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.444382 kernel: audit: type=1104 audit(1747442145.434:493): pid=5321 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:45.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:43426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:35:47.616087 kubelet[2108]: I0517 00:35:47.616029 2108 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:35:47.642000 audit[5340]: NETFILTER_CFG table=filter:126 family=2 entries=12 op=nft_register_rule pid=5340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:47.642000 audit[5340]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fffe4146140 a2=0 a3=7fffe414612c items=0 ppid=2253 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:47.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:47.649000 audit[5340]: NETFILTER_CFG table=nat:127 family=2 entries=34 op=nft_register_chain pid=5340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:35:47.649000 audit[5340]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fffe4146140 a2=0 a3=7fffe414612c items=0 ppid=2253 pid=5340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:47.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:35:49.855914 kubelet[2108]: E0517 00:35:49.855852 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-zf9xd" podUID="915e2165-3634-409e-af91-ef9388cac59f" May 17 00:35:50.438345 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:35348.service. 
May 17 00:35:50.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:35348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:50.439435 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:35:50.439508 kernel: audit: type=1130 audit(1747442150.438:497): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:35348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:50.479000 audit[5343]: USER_ACCT pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.479924 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 35348 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:50.484104 kernel: audit: type=1101 audit(1747442150.479:498): pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.484478 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:50.483000 audit[5343]: CRED_ACQ pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.490228 systemd-logind[1292]: New session 16 of user core. 
May 17 00:35:50.490560 kernel: audit: type=1103 audit(1747442150.483:499): pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.490592 kernel: audit: type=1006 audit(1747442150.484:500): pid=5343 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 17 00:35:50.490606 kernel: audit: type=1300 audit(1747442150.484:500): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceecfc040 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:50.484000 audit[5343]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceecfc040 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:50.491264 systemd[1]: Started session-16.scope. 
May 17 00:35:50.484000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:50.495754 kernel: audit: type=1327 audit(1747442150.484:500): proctitle=737368643A20636F7265205B707269765D May 17 00:35:50.496000 audit[5343]: USER_START pid=5343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.497000 audit[5346]: CRED_ACQ pid=5346 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.504360 kernel: audit: type=1105 audit(1747442150.496:501): pid=5343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.504420 kernel: audit: type=1103 audit(1747442150.497:502): pid=5346 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.633328 sshd[5343]: pam_unix(sshd:session): session closed for user core May 17 00:35:50.634000 audit[5343]: USER_END pid=5343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.635767 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:35348.service: Deactivated successfully. 
May 17 00:35:50.636938 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:35:50.637000 systemd-logind[1292]: Session 16 logged out. Waiting for processes to exit. May 17 00:35:50.637957 systemd-logind[1292]: Removed session 16. May 17 00:35:50.634000 audit[5343]: CRED_DISP pid=5343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.641957 kernel: audit: type=1106 audit(1747442150.634:503): pid=5343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.642113 kernel: audit: type=1104 audit(1747442150.634:504): pid=5343 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:50.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:35348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:50.857028 kubelet[2108]: E0517 00:35:50.856953 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-55d44b8df7-v6qvx" podUID="664372a8-af86-4567-a233-d8be21950e7b" May 17 00:35:53.855491 kubelet[2108]: E0517 00:35:53.855418 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:35:55.636756 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:35352.service. May 17 00:35:55.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.116:22-10.0.0.1:35352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:35:55.639666 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:35:55.639784 kernel: audit: type=1130 audit(1747442155.636:506): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.116:22-10.0.0.1:35352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:35:55.680000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.682358 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 35352 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:35:55.682000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.687153 sshd[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:35:55.691539 kernel: audit: type=1101 audit(1747442155.680:507): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.691686 kernel: audit: type=1103 audit(1747442155.682:508): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.691721 kernel: audit: type=1006 audit(1747442155.682:509): pid=5398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 May 17 00:35:55.692386 systemd-logind[1292]: New session 17 of user core. May 17 00:35:55.692931 systemd[1]: Started session-17.scope. 
May 17 00:35:55.682000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8aecbce0 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:55.702849 kernel: audit: type=1300 audit(1747442155.682:509): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8aecbce0 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:35:55.703015 kernel: audit: type=1327 audit(1747442155.682:509): proctitle=737368643A20636F7265205B707269765D May 17 00:35:55.703042 kernel: audit: type=1105 audit(1747442155.698:510): pid=5398 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.682000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:35:55.698000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.700000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.713126 kernel: audit: type=1103 audit(1747442155.700:511): pid=5401 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.941638 sshd[5398]: pam_unix(sshd:session): session closed for user core May 17 00:35:55.941000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.944340 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:35352.service: Deactivated successfully. May 17 00:35:55.945394 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:35:55.945420 systemd-logind[1292]: Session 17 logged out. Waiting for processes to exit. May 17 00:35:55.946512 systemd-logind[1292]: Removed session 17. May 17 00:35:55.948106 kernel: audit: type=1106 audit(1747442155.941:512): pid=5398 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.948167 kernel: audit: type=1104 audit(1747442155.941:513): pid=5398 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.941000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:35:55.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.116:22-10.0.0.1:35352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:35:57.855133 kubelet[2108]: E0517 00:35:57.855061 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:36:00.944650 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:45030.service. May 17 00:36:00.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.116:22-10.0.0.1:45030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:00.945935 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:36:00.946055 kernel: audit: type=1130 audit(1747442160.943:515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.116:22-10.0.0.1:45030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:00.982000 audit[5418]: USER_ACCT pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:00.984448 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 45030 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:00.986000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:00.988305 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:00.994201 kernel: audit: type=1101 audit(1747442160.982:516): pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:00.994328 kernel: audit: type=1103 audit(1747442160.986:517): pid=5418 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:00.994361 kernel: audit: type=1006 audit(1747442160.986:518): pid=5418 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 May 17 00:36:00.994387 kernel: audit: type=1300 audit(1747442160.986:518): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff95af47d0 a2=3 a3=0 items=0 ppid=1 pid=5418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:00.986000 audit[5418]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff95af47d0 a2=3 a3=0 items=0 ppid=1 pid=5418 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:00.992791 systemd[1]: Started session-18.scope. May 17 00:36:00.993219 systemd-logind[1292]: New session 18 of user core. 
May 17 00:36:00.986000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:00.999918 kernel: audit: type=1327 audit(1747442160.986:518): proctitle=737368643A20636F7265205B707269765D May 17 00:36:00.999989 kernel: audit: type=1105 audit(1747442160.997:519): pid=5418 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:00.997000 audit[5418]: USER_START pid=5418 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:00.999000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.022090 kernel: audit: type=1103 audit(1747442160.999:520): pid=5421 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.120668 sshd[5418]: pam_unix(sshd:session): session closed for user core May 17 00:36:01.120000 audit[5418]: USER_END pid=5418 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.124184 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:45034.service. 
May 17 00:36:01.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.116:22-10.0.0.1:45034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:01.129670 kernel: audit: type=1106 audit(1747442161.120:521): pid=5418 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.129721 kernel: audit: type=1130 audit(1747442161.122:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.116:22-10.0.0.1:45034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:01.129000 audit[5418]: CRED_DISP pid=5418 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.131132 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:45030.service: Deactivated successfully. May 17 00:36:01.132016 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:36:01.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.116:22-10.0.0.1:45030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:01.133106 systemd-logind[1292]: Session 18 logged out. Waiting for processes to exit. May 17 00:36:01.133908 systemd-logind[1292]: Removed session 18. 
May 17 00:36:01.160000 audit[5430]: USER_ACCT pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.161000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.161000 audit[5430]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe63c39a40 a2=3 a3=0 items=0 ppid=1 pid=5430 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:01.161000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:01.162695 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 45034 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:01.162771 sshd[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:01.166704 systemd-logind[1292]: New session 19 of user core. May 17 00:36:01.167398 systemd[1]: Started session-19.scope. 
May 17 00:36:01.170000 audit[5430]: USER_START pid=5430 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.172000 audit[5435]: CRED_ACQ pid=5435 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.432023 sshd[5430]: pam_unix(sshd:session): session closed for user core May 17 00:36:01.435538 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:45036.service. May 17 00:36:01.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.116:22-10.0.0.1:45036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:01.435000 audit[5430]: USER_END pid=5430 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.435000 audit[5430]: CRED_DISP pid=5430 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.437523 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:45034.service: Deactivated successfully. May 17 00:36:01.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.116:22-10.0.0.1:45034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:01.438449 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:36:01.439462 systemd-logind[1292]: Session 19 logged out. Waiting for processes to exit. May 17 00:36:01.440366 systemd-logind[1292]: Removed session 19. May 17 00:36:01.474000 audit[5443]: USER_ACCT pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.475632 sshd[5443]: Accepted publickey for core from 10.0.0.1 port 45036 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:01.475000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.475000 audit[5443]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2c184a40 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:01.475000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:01.476645 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:01.481354 systemd[1]: Started session-20.scope. May 17 00:36:01.481785 systemd-logind[1292]: New session 20 of user core. 
May 17 00:36:01.486000 audit[5443]: USER_START pid=5443 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.487000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:01.855811 env[1307]: time="2025-05-17T00:36:01.855548718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:36:02.105897 env[1307]: time="2025-05-17T00:36:02.105656555Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:36:02.108614 env[1307]: time="2025-05-17T00:36:02.108513675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:36:02.108842 kubelet[2108]: E0517 00:36:02.108803 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:36:02.109201 kubelet[2108]: E0517 00:36:02.109179 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:36:02.109423 kubelet[2108]: E0517 00:36:02.109386 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckmv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-zf9xd_calico-system(915e2165-3634-409e-af91-ef9388cac59f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:36:02.110957 kubelet[2108]: E0517 00:36:02.110926 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-zf9xd" podUID="915e2165-3634-409e-af91-ef9388cac59f" May 17 
00:36:03.283811 sshd[5443]: pam_unix(sshd:session): session closed for user core May 17 00:36:03.286480 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:45038.service. May 17 00:36:03.285000 audit[5443]: USER_END pid=5443 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.285000 audit[5443]: CRED_DISP pid=5443 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.116:22-10.0.0.1:45038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:03.287616 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:45036.service: Deactivated successfully. May 17 00:36:03.288517 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:36:03.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.116:22-10.0.0.1:45036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:03.290385 systemd-logind[1292]: Session 20 logged out. Waiting for processes to exit. May 17 00:36:03.291410 systemd-logind[1292]: Removed session 20. 
May 17 00:36:03.302000 audit[5465]: NETFILTER_CFG table=filter:128 family=2 entries=24 op=nft_register_rule pid=5465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:03.302000 audit[5465]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffdd62168c0 a2=0 a3=7ffdd62168ac items=0 ppid=2253 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:03.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:03.309000 audit[5465]: NETFILTER_CFG table=nat:129 family=2 entries=22 op=nft_register_rule pid=5465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:03.309000 audit[5465]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdd62168c0 a2=0 a3=0 items=0 ppid=2253 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:03.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:03.327000 audit[5468]: NETFILTER_CFG table=filter:130 family=2 entries=36 op=nft_register_rule pid=5468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:03.327000 audit[5468]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7fff6b1e1650 a2=0 a3=7fff6b1e163c items=0 ppid=2253 pid=5468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:03.327000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:03.331000 audit[5462]: USER_ACCT pid=5462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.332220 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 45038 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:03.332000 audit[5462]: CRED_ACQ pid=5462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.332000 audit[5462]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe12845a40 a2=3 a3=0 items=0 ppid=1 pid=5462 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:03.332000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:03.333422 sshd[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:03.334000 audit[5468]: NETFILTER_CFG table=nat:131 family=2 entries=22 op=nft_register_rule pid=5468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:03.334000 audit[5468]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff6b1e1650 a2=0 a3=0 items=0 ppid=2253 pid=5468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:03.334000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:03.337808 systemd[1]: Started session-21.scope. May 17 00:36:03.338700 systemd-logind[1292]: New session 21 of user core. May 17 00:36:03.342000 audit[5462]: USER_START pid=5462 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.343000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.868222 sshd[5462]: pam_unix(sshd:session): session closed for user core May 17 00:36:03.870579 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:45048.service. May 17 00:36:03.869000 audit[5462]: USER_END pid=5462 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.869000 audit[5462]: CRED_DISP pid=5462 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.116:22-10.0.0.1:45048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:03.872435 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:45038.service: Deactivated successfully. 
May 17 00:36:03.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.116:22-10.0.0.1:45038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:03.873946 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:36:03.874654 systemd-logind[1292]: Session 21 logged out. Waiting for processes to exit. May 17 00:36:03.877272 systemd-logind[1292]: Removed session 21. May 17 00:36:03.910000 audit[5477]: USER_ACCT pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.912296 sshd[5477]: Accepted publickey for core from 10.0.0.1 port 45048 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:03.912000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.912000 audit[5477]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd22857130 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:03.912000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:03.913396 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:03.918239 systemd[1]: Started session-22.scope. May 17 00:36:03.919398 systemd-logind[1292]: New session 22 of user core. 
May 17 00:36:03.923000 audit[5477]: USER_START pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:03.924000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:04.050185 sshd[5477]: pam_unix(sshd:session): session closed for user core May 17 00:36:04.050000 audit[5477]: USER_END pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:04.050000 audit[5477]: CRED_DISP pid=5477 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:04.052945 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:45048.service: Deactivated successfully. May 17 00:36:04.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.116:22-10.0.0.1:45048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:04.053960 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:36:04.053966 systemd-logind[1292]: Session 22 logged out. Waiting for processes to exit. May 17 00:36:04.054969 systemd-logind[1292]: Removed session 22. 
May 17 00:36:05.855719 env[1307]: time="2025-05-17T00:36:05.855643692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:36:06.132225 env[1307]: time="2025-05-17T00:36:06.132050892Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:36:06.133161 env[1307]: time="2025-05-17T00:36:06.133127252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:36:06.133414 kubelet[2108]: E0517 00:36:06.133356 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:36:06.133414 kubelet[2108]: E0517 00:36:06.133421 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:36:06.133838 kubelet[2108]: E0517 00:36:06.133556 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3da8b4fd3b234db5a0b65fe67fcf7d29,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wrbb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d44b8df7-v6qvx_calico-system(664372a8-af86-4567-a233-d8be21950e7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:36:06.135684 env[1307]: time="2025-05-17T00:36:06.135638103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:36:06.347726 
env[1307]: time="2025-05-17T00:36:06.347445849Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:36:06.376975 env[1307]: time="2025-05-17T00:36:06.376877106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:36:06.377217 kubelet[2108]: E0517 00:36:06.377178 2108 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:36:06.377281 kubelet[2108]: E0517 00:36:06.377235 2108 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:36:06.377394 kubelet[2108]: E0517 00:36:06.377350 2108 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrbb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d44b8df7-v6qvx_calico-system(664372a8-af86-4567-a233-d8be21950e7b): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:36:06.378615 kubelet[2108]: E0517 00:36:06.378534 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-55d44b8df7-v6qvx" podUID="664372a8-af86-4567-a233-d8be21950e7b" May 17 00:36:08.659000 audit[5517]: NETFILTER_CFG table=filter:132 family=2 entries=24 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:08.662354 kernel: kauditd_printk_skb: 57 callbacks suppressed May 17 00:36:08.662503 kernel: audit: type=1325 audit(1747442168.659:564): table=filter:132 family=2 entries=24 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:08.659000 audit[5517]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff4b1c0da0 a2=0 a3=7fff4b1c0d8c items=0 ppid=2253 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:08.670257 kernel: audit: type=1300 audit(1747442168.659:564): arch=c000003e syscall=46 success=yes exit=4504 a0=3 
a1=7fff4b1c0da0 a2=0 a3=7fff4b1c0d8c items=0 ppid=2253 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:08.670338 kernel: audit: type=1327 audit(1747442168.659:564): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:08.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:08.687921 kernel: audit: type=1325 audit(1747442168.674:565): table=nat:133 family=2 entries=106 op=nft_register_chain pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:08.688096 kernel: audit: type=1300 audit(1747442168.674:565): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fff4b1c0da0 a2=0 a3=7fff4b1c0d8c items=0 ppid=2253 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:08.688120 kernel: audit: type=1327 audit(1747442168.674:565): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:08.674000 audit[5517]: NETFILTER_CFG table=nat:133 family=2 entries=106 op=nft_register_chain pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:36:08.674000 audit[5517]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fff4b1c0da0 a2=0 a3=7fff4b1c0d8c items=0 ppid=2253 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:08.674000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:36:08.854674 kubelet[2108]: E0517 00:36:08.854634 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:36:09.052879 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:57054.service. May 17 00:36:09.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:57054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.057103 kernel: audit: type=1130 audit(1747442169.051:566): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:57054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.089000 audit[5519]: USER_ACCT pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:09.091003 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 57054 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:09.092833 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:09.091000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:09.098707 systemd-logind[1292]: New session 23 of user core. 
May 17 00:36:09.099548 kernel: audit: type=1101 audit(1747442169.089:567): pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:09.099633 kernel: audit: type=1103 audit(1747442169.091:568): pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:09.099665 kernel: audit: type=1006 audit(1747442169.091:569): pid=5519 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 May 17 00:36:09.099859 systemd[1]: Started session-23.scope. May 17 00:36:09.091000 audit[5519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc557bef50 a2=3 a3=0 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:09.091000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:09.105000 audit[5519]: USER_START pid=5519 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:09.106000 audit[5522]: CRED_ACQ pid=5522 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:09.209280 sshd[5519]: pam_unix(sshd:session): session closed for user core May 17 00:36:09.209000 audit[5519]: 
USER_END pid=5519 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:09.209000 audit[5519]: CRED_DISP pid=5519 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:09.212028 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:57054.service: Deactivated successfully. May 17 00:36:09.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:57054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:09.213321 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:36:09.213487 systemd-logind[1292]: Session 23 logged out. Waiting for processes to exit. May 17 00:36:09.214378 systemd-logind[1292]: Removed session 23. May 17 00:36:14.211779 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:57070.service. May 17 00:36:14.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:57070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:14.212916 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:36:14.212978 kernel: audit: type=1130 audit(1747442174.210:575): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:57070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:14.248000 audit[5533]: USER_ACCT pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.249506 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 57070 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:14.251376 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:14.250000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.256456 systemd[1]: Started session-24.scope. May 17 00:36:14.257260 kernel: audit: type=1101 audit(1747442174.248:576): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.257426 kernel: audit: type=1103 audit(1747442174.250:577): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.257461 kernel: audit: type=1006 audit(1747442174.250:578): pid=5533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 May 17 00:36:14.256941 systemd-logind[1292]: New session 24 of user core. 
May 17 00:36:14.263364 kernel: audit: type=1300 audit(1747442174.250:578): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0acdbde0 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:14.250000 audit[5533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0acdbde0 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:14.265039 kernel: audit: type=1327 audit(1747442174.250:578): proctitle=737368643A20636F7265205B707269765D May 17 00:36:14.250000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:14.261000 audit[5533]: USER_START pid=5533 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.262000 audit[5536]: CRED_ACQ pid=5536 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.272725 kernel: audit: type=1105 audit(1747442174.261:579): pid=5533 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.272795 kernel: audit: type=1103 audit(1747442174.262:580): pid=5536 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.379588 sshd[5533]: pam_unix(sshd:session): session closed for user core May 17 00:36:14.379000 audit[5533]: USER_END pid=5533 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.381751 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:57070.service: Deactivated successfully. May 17 00:36:14.382974 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:36:14.383524 systemd-logind[1292]: Session 24 logged out. Waiting for processes to exit. May 17 00:36:14.384395 systemd-logind[1292]: Removed session 24. May 17 00:36:14.379000 audit[5533]: CRED_DISP pid=5533 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.388236 kernel: audit: type=1106 audit(1747442174.379:581): pid=5533 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.388306 kernel: audit: type=1104 audit(1747442174.379:582): pid=5533 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:14.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:57070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:36:15.855257 kubelet[2108]: E0517 00:36:15.855221 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:36:15.856154 kubelet[2108]: E0517 00:36:15.856132 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-zf9xd" podUID="915e2165-3634-409e-af91-ef9388cac59f" May 17 00:36:19.382706 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:34394.service. May 17 00:36:19.388636 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:36:19.388865 kernel: audit: type=1130 audit(1747442179.381:584): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.116:22-10.0.0.1:34394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:19.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.116:22-10.0.0.1:34394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:36:19.439000 audit[5550]: USER_ACCT pid=5550 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.444213 kernel: audit: type=1101 audit(1747442179.439:585): pid=5550 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.441372 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:19.444514 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:19.439000 audit[5550]: CRED_ACQ pid=5550 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.447981 systemd[1]: Started session-25.scope. May 17 00:36:19.450701 kernel: audit: type=1103 audit(1747442179.439:586): pid=5550 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.450728 kernel: audit: type=1006 audit(1747442179.440:587): pid=5550 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 May 17 00:36:19.448858 systemd-logind[1292]: New session 25 of user core. 
May 17 00:36:19.455104 kernel: audit: type=1300 audit(1747442179.440:587): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1b406430 a2=3 a3=0 items=0 ppid=1 pid=5550 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:19.440000 audit[5550]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1b406430 a2=3 a3=0 items=0 ppid=1 pid=5550 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:19.440000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:19.452000 audit[5550]: USER_START pid=5550 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.460858 kernel: audit: type=1327 audit(1747442179.440:587): proctitle=737368643A20636F7265205B707269765D May 17 00:36:19.460902 kernel: audit: type=1105 audit(1747442179.452:588): pid=5550 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.460995 kernel: audit: type=1103 audit(1747442179.453:589): pid=5553 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.453000 audit[5553]: CRED_ACQ pid=5553 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.552578 sshd[5550]: pam_unix(sshd:session): session closed for user core May 17 00:36:19.553000 audit[5550]: USER_END pid=5550 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.555931 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:34394.service: Deactivated successfully. May 17 00:36:19.556693 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:36:19.553000 audit[5550]: CRED_DISP pid=5550 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.562316 kernel: audit: type=1106 audit(1747442179.553:590): pid=5550 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.562360 kernel: audit: type=1104 audit(1747442179.553:591): pid=5550 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:19.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.116:22-10.0.0.1:34394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:19.562988 systemd-logind[1292]: Session 25 logged out. Waiting for processes to exit. 
May 17 00:36:19.563731 systemd-logind[1292]: Removed session 25. May 17 00:36:20.856468 kubelet[2108]: E0517 00:36:20.856422 2108 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-55d44b8df7-v6qvx" podUID="664372a8-af86-4567-a233-d8be21950e7b" May 17 00:36:23.632288 systemd[1]: run-containerd-runc-k8s.io-6ecc41d797f3f2652720b5a6fc5cb7ed1917e51f760a28b40cfe89f5cee524e5-runc.r6opSs.mount: Deactivated successfully. May 17 00:36:24.560891 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:36:24.561002 kernel: audit: type=1130 audit(1747442184.554:593): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.116:22-10.0.0.1:34404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:24.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.116:22-10.0.0.1:34404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:36:24.555542 systemd[1]: Started sshd@25-10.0.0.116:22-10.0.0.1:34404.service. 
May 17 00:36:24.590000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.591651 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 34404 ssh2: RSA SHA256:qUHWRKrHUGpvGAKaXIx4BM5iuCZcAPI02a20wC9hycU May 17 00:36:24.594000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.595691 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:36:24.600975 kernel: audit: type=1101 audit(1747442184.590:594): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.601137 kernel: audit: type=1103 audit(1747442184.594:595): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.601157 kernel: audit: type=1006 audit(1747442184.594:596): pid=5607 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 May 17 00:36:24.599692 systemd-logind[1292]: New session 26 of user core. May 17 00:36:24.599952 systemd[1]: Started session-26.scope. 
May 17 00:36:24.606629 kernel: audit: type=1300 audit(1747442184.594:596): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe708c4b90 a2=3 a3=0 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:24.606671 kernel: audit: type=1327 audit(1747442184.594:596): proctitle=737368643A20636F7265205B707269765D May 17 00:36:24.594000 audit[5607]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe708c4b90 a2=3 a3=0 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:36:24.594000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:36:24.611046 kernel: audit: type=1105 audit(1747442184.604:597): pid=5607 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.604000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.605000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.615096 kernel: audit: type=1103 audit(1747442184.605:598): pid=5610 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.706534 sshd[5607]: pam_unix(sshd:session): session closed for user core May 17 00:36:24.706000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.706000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.709291 systemd[1]: sshd@25-10.0.0.116:22-10.0.0.1:34404.service: Deactivated successfully. May 17 00:36:24.710419 systemd[1]: session-26.scope: Deactivated successfully. May 17 00:36:24.710970 systemd-logind[1292]: Session 26 logged out. Waiting for processes to exit. May 17 00:36:24.711970 systemd-logind[1292]: Removed session 26. May 17 00:36:24.715921 kernel: audit: type=1106 audit(1747442184.706:599): pid=5607 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.716003 kernel: audit: type=1104 audit(1747442184.706:600): pid=5607 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 17 00:36:24.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.116:22-10.0.0.1:34404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success'