May 17 00:41:43.143668 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:41:43.143720 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:43.143746 kernel: BIOS-provided physical RAM map:
May 17 00:41:43.143764 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
May 17 00:41:43.143785 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
May 17 00:41:43.143802 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
May 17 00:41:43.143831 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
May 17 00:41:43.143852 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
May 17 00:41:43.143871 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd277fff] usable
May 17 00:41:43.143892 kernel: BIOS-e820: [mem 0x00000000bd278000-0x00000000bd281fff] ACPI data
May 17 00:41:43.143910 kernel: BIOS-e820: [mem 0x00000000bd282000-0x00000000bf8ecfff] usable
May 17 00:41:43.143931 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
May 17 00:41:43.143951 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
May 17 00:41:43.143970 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
May 17 00:41:43.144001 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
May 17 00:41:43.144023 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
May 17 00:41:43.144047 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
May 17 00:41:43.144064 kernel: NX (Execute Disable) protection: active
May 17 00:41:43.144081 kernel: efi: EFI v2.70 by EDK II
May 17 00:41:43.144103 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd278018
May 17 00:41:43.144125 kernel: random: crng init done
May 17 00:41:43.144144 kernel: SMBIOS 2.4 present.
May 17 00:41:43.144165 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
May 17 00:41:43.144180 kernel: Hypervisor detected: KVM
May 17 00:41:43.144196 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:41:43.144214 kernel: kvm-clock: cpu 0, msr 20219a001, primary cpu clock
May 17 00:41:43.144231 kernel: kvm-clock: using sched offset of 13651835838 cycles
May 17 00:41:43.144247 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:41:43.144264 kernel: tsc: Detected 2299.998 MHz processor
May 17 00:41:43.144280 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:41:43.144299 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:41:43.144317 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
May 17 00:41:43.144593 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:41:43.144617 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
May 17 00:41:43.144637 kernel: Using GB pages for direct mapping
May 17 00:41:43.144656 kernel: Secure boot disabled
May 17 00:41:43.144676 kernel: ACPI: Early table checksum verification disabled
May 17 00:41:43.144697 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
May 17 00:41:43.144719 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
May 17 00:41:43.144738 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
May 17 00:41:43.144768 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
May 17 00:41:43.144792 kernel: ACPI: FACS 0x00000000BFBF2000 000040
May 17 00:41:43.144811 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
May 17 00:41:43.144833 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
May 17 00:41:43.144852 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
May 17 00:41:43.144871 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
May 17 00:41:43.144916 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
May 17 00:41:43.144943 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
May 17 00:41:43.144969 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
May 17 00:41:43.144988 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
May 17 00:41:43.145010 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
May 17 00:41:43.145034 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
May 17 00:41:43.145055 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
May 17 00:41:43.145074 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
May 17 00:41:43.145097 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
May 17 00:41:43.145123 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
May 17 00:41:43.145143 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
May 17 00:41:43.145165 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:41:43.145184 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:41:43.145206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 17 00:41:43.145225 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
May 17 00:41:43.145263 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
May 17 00:41:43.145290 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
May 17 00:41:43.145313 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
May 17 00:41:43.145350 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
May 17 00:41:43.145373 kernel: Zone ranges:
May 17 00:41:43.145395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:41:43.145416 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:41:43.145478 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
May 17 00:41:43.145500 kernel: Movable zone start for each node
May 17 00:41:43.145520 kernel: Early memory node ranges
May 17 00:41:43.145544 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
May 17 00:41:43.145562 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
May 17 00:41:43.145599 kernel: node 0: [mem 0x0000000000100000-0x00000000bd277fff]
May 17 00:41:43.145620 kernel: node 0: [mem 0x00000000bd282000-0x00000000bf8ecfff]
May 17 00:41:43.145640 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
May 17 00:41:43.145662 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
May 17 00:41:43.145683 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
May 17 00:41:43.145720 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:41:43.145743 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
May 17 00:41:43.145762 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
May 17 00:41:43.145780 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
May 17 00:41:43.145808 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
May 17 00:41:43.145840 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
May 17 00:41:43.145900 kernel: ACPI: PM-Timer IO Port: 0xb008
May 17 00:41:43.145922 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:41:43.145948 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:41:43.145968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:41:43.145987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:41:43.146010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:41:43.146029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:41:43.146054 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:41:43.146073 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:41:43.146093 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 17 00:41:43.146118 kernel: Booting paravirtualized kernel on KVM
May 17 00:41:43.146142 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:41:43.146160 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:41:43.146183 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:41:43.146202 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:41:43.146223 kernel: pcpu-alloc: [0] 0 1
May 17 00:41:43.146245 kernel: kvm-guest: PV spinlocks enabled
May 17 00:41:43.146268 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:41:43.146288 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
May 17 00:41:43.146311 kernel: Policy zone: Normal
May 17 00:41:43.146363 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:43.146388 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:41:43.146405 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 17 00:41:43.146446 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:41:43.146468 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:41:43.146498 kernel: Memory: 7515412K/7860544K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 344872K reserved, 0K cma-reserved)
May 17 00:41:43.146520 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:41:43.146543 kernel: Kernel/User page tables isolation: enabled
May 17 00:41:43.146565 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:41:43.146585 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:41:43.146606 kernel: rcu: Hierarchical RCU implementation.
May 17 00:41:43.146629 kernel: rcu: RCU event tracing is enabled.
May 17 00:41:43.146652 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:41:43.146680 kernel: Rude variant of Tasks RCU enabled.
May 17 00:41:43.146719 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:41:43.146741 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:41:43.146768 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:41:43.146791 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:41:43.146815 kernel: Console: colour dummy device 80x25
May 17 00:41:43.146847 kernel: printk: console [ttyS0] enabled
May 17 00:41:43.146868 kernel: ACPI: Core revision 20210730
May 17 00:41:43.146886 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:41:43.146904 kernel: x2apic enabled
May 17 00:41:43.146925 kernel: Switched APIC routing to physical x2apic.
May 17 00:41:43.146943 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
May 17 00:41:43.146966 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
May 17 00:41:43.146990 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
May 17 00:41:43.147012 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
May 17 00:41:43.147032 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
May 17 00:41:43.147052 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:41:43.147074 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 17 00:41:43.147092 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 17 00:41:43.147109 kernel: Spectre V2 : Mitigation: IBRS
May 17 00:41:43.147128 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:41:43.147146 kernel: RETBleed: Mitigation: IBRS
May 17 00:41:43.147164 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:41:43.147183 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
May 17 00:41:43.147201 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 17 00:41:43.147219 kernel: MDS: Mitigation: Clear CPU buffers
May 17 00:41:43.147241 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:41:43.147259 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:41:43.147278 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:41:43.147297 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:41:43.147315 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:41:43.147476 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:41:43.147516 kernel: Freeing SMP alternatives memory: 32K
May 17 00:41:43.147541 kernel: pid_max: default: 32768 minimum: 301
May 17 00:41:43.147559 kernel: LSM: Security Framework initializing
May 17 00:41:43.147584 kernel: SELinux: Initializing.
May 17 00:41:43.147603 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:41:43.147620 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:41:43.147638 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
May 17 00:41:43.147657 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
May 17 00:41:43.147675 kernel: signal: max sigframe size: 1776
May 17 00:41:43.147693 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:41:43.147712 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:41:43.147731 kernel: smp: Bringing up secondary CPUs ...
May 17 00:41:43.147754 kernel: x86: Booting SMP configuration:
May 17 00:41:43.147772 kernel: .... node #0, CPUs: #1
May 17 00:41:43.147803 kernel: kvm-clock: cpu 1, msr 20219a041, secondary cpu clock
May 17 00:41:43.147822 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 17 00:41:43.147843 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:41:43.147866 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:41:43.147888 kernel: smpboot: Max logical packages: 1
May 17 00:41:43.147906 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
May 17 00:41:43.147930 kernel: devtmpfs: initialized
May 17 00:41:43.147950 kernel: x86/mm: Memory block size: 128MB
May 17 00:41:43.147972 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
May 17 00:41:43.147991 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:41:43.148010 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:41:43.148028 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:41:43.148047 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:41:43.148066 kernel: audit: initializing netlink subsys (disabled)
May 17 00:41:43.148085 kernel: audit: type=2000 audit(1747442501.871:1): state=initialized audit_enabled=0 res=1
May 17 00:41:43.148108 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:41:43.148127 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:41:43.148147 kernel: cpuidle: using governor menu
May 17 00:41:43.148167 kernel: ACPI: bus type PCI registered
May 17 00:41:43.148188 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:41:43.148207 kernel: dca service started, version 1.12.1
May 17 00:41:43.148227 kernel: PCI: Using configuration type 1 for base access
May 17 00:41:43.148247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:41:43.148267 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:41:43.148290 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:41:43.148309 kernel: ACPI: Added _OSI(Module Device)
May 17 00:41:43.148328 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:41:43.148347 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:41:43.148375 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:41:43.148396 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:41:43.148416 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:41:43.151655 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:41:43.151680 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
May 17 00:41:43.151712 kernel: ACPI: Interpreter enabled
May 17 00:41:43.151737 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:41:43.151761 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:41:43.151781 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:41:43.151802 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
May 17 00:41:43.151822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:41:43.152125 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:41:43.152350 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
May 17 00:41:43.152403 kernel: PCI host bridge to bus 0000:00
May 17 00:41:43.152642 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:41:43.152851 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:41:43.153039 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:41:43.153227 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
May 17 00:41:43.153435 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:41:43.153665 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 17 00:41:43.153911 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
May 17 00:41:43.154143 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 17 00:41:43.154349 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
May 17 00:41:43.161367 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
May 17 00:41:43.161655 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
May 17 00:41:43.161839 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
May 17 00:41:43.162040 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:41:43.162245 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
May 17 00:41:43.162477 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
May 17 00:41:43.162669 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
May 17 00:41:43.162849 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
May 17 00:41:43.163025 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
May 17 00:41:43.163048 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:41:43.163074 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:41:43.163093 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:41:43.163113 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:41:43.163132 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 17 00:41:43.163152 kernel: iommu: Default domain type: Translated
May 17 00:41:43.163171 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:41:43.163190 kernel: vgaarb: loaded
May 17 00:41:43.163210 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:41:43.163229 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:41:43.163253 kernel: PTP clock support registered
May 17 00:41:43.163272 kernel: Registered efivars operations
May 17 00:41:43.163291 kernel: PCI: Using ACPI for IRQ routing
May 17 00:41:43.163310 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:41:43.163329 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
May 17 00:41:43.163347 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
May 17 00:41:43.163366 kernel: e820: reserve RAM buffer [mem 0xbd278000-0xbfffffff]
May 17 00:41:43.163395 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
May 17 00:41:43.163414 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
May 17 00:41:43.163466 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:41:43.163483 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:41:43.163504 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:41:43.163527 kernel: pnp: PnP ACPI init
May 17 00:41:43.163550 kernel: pnp: PnP ACPI: found 7 devices
May 17 00:41:43.163573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:41:43.163594 kernel: NET: Registered PF_INET protocol family
May 17 00:41:43.163618 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:41:43.163715 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 17 00:41:43.163783 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:41:43.163809 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:41:43.163831 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 17 00:41:43.163855 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 17 00:41:43.163876 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:41:43.163898 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:41:43.163922 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:41:43.163955 kernel: NET: Registered PF_XDP protocol family
May 17 00:41:43.164216 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:41:43.164471 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:41:43.164668 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:41:43.164856 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
May 17 00:41:43.165072 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 17 00:41:43.165103 kernel: PCI: CLS 0 bytes, default 64
May 17 00:41:43.165127 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:41:43.165156 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
May 17 00:41:43.165178 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:41:43.165202 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
May 17 00:41:43.165225 kernel: clocksource: Switched to clocksource tsc
May 17 00:41:43.165247 kernel: Initialise system trusted keyrings
May 17 00:41:43.165269 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 17 00:41:43.165291 kernel: Key type asymmetric registered
May 17 00:41:43.165314 kernel: Asymmetric key parser 'x509' registered
May 17 00:41:43.165336 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:41:43.165369 kernel: io scheduler mq-deadline registered
May 17 00:41:43.165393 kernel: io scheduler kyber registered
May 17 00:41:43.165414 kernel: io scheduler bfq registered
May 17 00:41:43.165453 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:41:43.165475 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 17 00:41:43.165848 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
May 17 00:41:43.165881 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
May 17 00:41:43.166124 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
May 17 00:41:43.166155 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 17 00:41:43.166453 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
May 17 00:41:43.166501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:41:43.166526 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:41:43.166552 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 00:41:43.166575 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
May 17 00:41:43.166595 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
May 17 00:41:43.166866 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
May 17 00:41:43.166903 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:41:43.166930 kernel: i8042: Warning: Keylock active
May 17 00:41:43.166948 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:41:43.166969 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:41:43.167172 kernel: rtc_cmos 00:00: RTC can wake from S4
May 17 00:41:43.167370 kernel: rtc_cmos 00:00: registered as rtc0
May 17 00:41:43.167595 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:41:42 UTC (1747442502)
May 17 00:41:43.167783 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
May 17 00:41:43.167813 kernel: intel_pstate: CPU model not supported
May 17 00:41:43.167841 kernel: pstore: Registered efi as persistent store backend
May 17 00:41:43.167864 kernel: NET: Registered PF_INET6 protocol family
May 17 00:41:43.167886 kernel: Segment Routing with IPv6
May 17 00:41:43.167903 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:41:43.167924 kernel: NET: Registered PF_PACKET protocol family
May 17 00:41:43.167946 kernel: Key type dns_resolver registered
May 17 00:41:43.167969 kernel: IPI shorthand broadcast: enabled
May 17 00:41:43.167994 kernel: sched_clock: Marking stable (771213514, 145943848)->(961015128, -43857766)
May 17 00:41:43.168014 kernel: registered taskstats version 1
May 17 00:41:43.168042 kernel: Loading compiled-in X.509 certificates
May 17 00:41:43.168062 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:41:43.168088 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:41:43.168107 kernel: Key type .fscrypt registered
May 17 00:41:43.168132 kernel: Key type fscrypt-provisioning registered
May 17 00:41:43.168153 kernel: pstore: Using crash dump compression: deflate
May 17 00:41:43.168177 kernel: ima: Allocated hash algorithm: sha1
May 17 00:41:43.168199 kernel: ima: No architecture policies found
May 17 00:41:43.168218 kernel: clk: Disabling unused clocks
May 17 00:41:43.168241 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:41:43.168266 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:41:43.168289 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:41:43.168311 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:41:43.168336 kernel: Run /init as init process
May 17 00:41:43.168370 kernel: with arguments:
May 17 00:41:43.168390 kernel: /init
May 17 00:41:43.168414 kernel: with environment:
May 17 00:41:43.168541 kernel: HOME=/
May 17 00:41:43.168567 kernel: TERM=linux
May 17 00:41:43.168594 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:41:43.168621 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:41:43.168647 systemd[1]: Detected virtualization kvm.
May 17 00:41:43.168673 systemd[1]: Detected architecture x86-64.
May 17 00:41:43.168695 systemd[1]: Running in initrd.
May 17 00:41:43.168719 systemd[1]: No hostname configured, using default hostname.
May 17 00:41:43.168746 systemd[1]: Hostname set to .
May 17 00:41:43.168769 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:41:43.168796 systemd[1]: Queued start job for default target initrd.target.
May 17 00:41:43.168816 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:41:43.168836 systemd[1]: Reached target cryptsetup.target.
May 17 00:41:43.168857 systemd[1]: Reached target paths.target.
May 17 00:41:43.168880 systemd[1]: Reached target slices.target.
May 17 00:41:43.168906 systemd[1]: Reached target swap.target.
May 17 00:41:43.168931 systemd[1]: Reached target timers.target.
May 17 00:41:43.168959 systemd[1]: Listening on iscsid.socket.
May 17 00:41:43.168981 systemd[1]: Listening on iscsiuio.socket.
May 17 00:41:43.169007 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:41:43.169029 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:41:43.169054 systemd[1]: Listening on systemd-journald.socket.
May 17 00:41:43.169080 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:41:43.169103 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:41:43.169127 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:41:43.169149 systemd[1]: Reached target sockets.target.
May 17 00:41:43.169195 systemd[1]: Starting kmod-static-nodes.service... May 17 00:41:43.169224 systemd[1]: Finished network-cleanup.service. May 17 00:41:43.169246 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:41:43.169273 systemd[1]: Starting systemd-journald.service... May 17 00:41:43.169297 systemd[1]: Starting systemd-modules-load.service... May 17 00:41:43.169324 systemd[1]: Starting systemd-resolved.service... May 17 00:41:43.169356 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:41:43.169379 systemd[1]: Finished kmod-static-nodes.service. May 17 00:41:43.169404 kernel: audit: type=1130 audit(1747442503.153:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:43.180624 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:41:43.180665 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:41:43.180692 kernel: audit: type=1130 audit(1747442503.168:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:43.180733 systemd-journald[189]: Journal started May 17 00:41:43.180872 systemd-journald[189]: Runtime Journal (/run/log/journal/fec82ec59c58d0c7e7cbdabe7b2e0c78) is 8.0M, max 148.8M, 140.8M free. May 17 00:41:43.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:43.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:43.192222 systemd[1]: Started systemd-journald.service. 
May 17 00:41:43.185365 systemd-modules-load[190]: Inserted module 'overlay'
May 17 00:41:43.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.203472 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:41:43.233676 kernel: audit: type=1130 audit(1747442503.201:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.233865 kernel: audit: type=1130 audit(1747442503.212:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.233895 kernel: audit: type=1130 audit(1747442503.218:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.213885 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:41:43.222014 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:41:43.253004 systemd-resolved[191]: Positive Trust Anchors:
May 17 00:41:43.254957 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:41:43.255928 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:41:43.267291 systemd-resolved[191]: Defaulting to hostname 'linux'.
May 17 00:41:43.270619 systemd[1]: Started systemd-resolved.service.
May 17 00:41:43.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.271352 systemd[1]: Reached target nss-lookup.target.
May 17 00:41:43.276710 kernel: audit: type=1130 audit(1747442503.269:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.282631 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:41:43.294696 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:41:43.294901 kernel: audit: type=1130 audit(1747442503.285:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.288267 systemd[1]: Starting dracut-cmdline.service...
May 17 00:41:43.306573 systemd-modules-load[190]: Inserted module 'br_netfilter'
May 17 00:41:43.310569 kernel: Bridge firewalling registered
May 17 00:41:43.310607 dracut-cmdline[205]: dracut-dracut-053
May 17 00:41:43.314560 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:43.342459 kernel: SCSI subsystem initialized
May 17 00:41:43.364855 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:41:43.364937 kernel: device-mapper: uevent: version 1.0.3
May 17 00:41:43.367828 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:41:43.373160 systemd-modules-load[190]: Inserted module 'dm_multipath'
May 17 00:41:43.384021 kernel: audit: type=1130 audit(1747442503.377:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.374446 systemd[1]: Finished systemd-modules-load.service.
May 17 00:41:43.380082 systemd[1]: Starting systemd-sysctl.service...
May 17 00:41:43.400296 systemd[1]: Finished systemd-sysctl.service.
May 17 00:41:43.415608 kernel: audit: type=1130 audit(1747442503.404:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.432473 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:41:43.453469 kernel: iscsi: registered transport (tcp)
May 17 00:41:43.482465 kernel: iscsi: registered transport (qla4xxx)
May 17 00:41:43.482549 kernel: QLogic iSCSI HBA Driver
May 17 00:41:43.534601 systemd[1]: Finished dracut-cmdline.service.
May 17 00:41:43.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:43.536004 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:41:43.600470 kernel: raid6: avx2x4 gen() 17992 MB/s
May 17 00:41:43.621466 kernel: raid6: avx2x4 xor() 7930 MB/s
May 17 00:41:43.642454 kernel: raid6: avx2x2 gen() 17907 MB/s
May 17 00:41:43.663464 kernel: raid6: avx2x2 xor() 17605 MB/s
May 17 00:41:43.684459 kernel: raid6: avx2x1 gen() 14119 MB/s
May 17 00:41:43.705465 kernel: raid6: avx2x1 xor() 15479 MB/s
May 17 00:41:43.726460 kernel: raid6: sse2x4 gen() 10746 MB/s
May 17 00:41:43.747459 kernel: raid6: sse2x4 xor() 6604 MB/s
May 17 00:41:43.768460 kernel: raid6: sse2x2 gen() 11764 MB/s
May 17 00:41:43.789466 kernel: raid6: sse2x2 xor() 7252 MB/s
May 17 00:41:43.810464 kernel: raid6: sse2x1 gen() 10171 MB/s
May 17 00:41:43.836525 kernel: raid6: sse2x1 xor() 5019 MB/s
May 17 00:41:43.836624 kernel: raid6: using algorithm avx2x4 gen() 17992 MB/s
May 17 00:41:43.836656 kernel: raid6: .... xor() 7930 MB/s, rmw enabled
May 17 00:41:43.841618 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:41:43.867474 kernel: xor: automatically using best checksumming function avx
May 17 00:41:43.993477 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:41:44.006725 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:41:44.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:44.006000 audit: BPF prog-id=7 op=LOAD
May 17 00:41:44.006000 audit: BPF prog-id=8 op=LOAD
May 17 00:41:44.008241 systemd[1]: Starting systemd-udevd.service...
May 17 00:41:44.027278 systemd-udevd[388]: Using default interface naming scheme 'v252'.
May 17 00:41:44.044849 systemd[1]: Started systemd-udevd.service.
May 17 00:41:44.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:44.055316 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:41:44.072238 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation
May 17 00:41:44.113964 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:41:44.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:44.115311 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:41:44.194648 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:41:44.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:44.290448 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:41:44.325457 kernel: scsi host0: Virtio SCSI HBA
May 17 00:41:44.348453 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
May 17 00:41:44.400137 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:41:44.400217 kernel: AES CTR mode by8 optimization enabled
May 17 00:41:44.480066 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
May 17 00:41:44.539744 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
May 17 00:41:44.539939 kernel: sd 0:0:1:0: [sda] Write Protect is off
May 17 00:41:44.540128 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
May 17 00:41:44.540296 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:41:44.540525 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:41:44.540556 kernel: GPT:17805311 != 25165823
May 17 00:41:44.540582 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:41:44.540613 kernel: GPT:17805311 != 25165823
May 17 00:41:44.540639 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:41:44.540661 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:41:44.540692 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
May 17 00:41:44.603458 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (442)
May 17 00:41:44.606252 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:41:44.616724 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:41:44.644817 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:41:44.669972 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:41:44.679733 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:41:44.696882 systemd[1]: Starting disk-uuid.service...
May 17 00:41:44.731463 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:41:44.732051 disk-uuid[507]: Primary Header is updated.
May 17 00:41:44.732051 disk-uuid[507]: Secondary Entries is updated.
May 17 00:41:44.732051 disk-uuid[507]: Secondary Header is updated.
May 17 00:41:45.769172 disk-uuid[508]: The operation has completed successfully.
May 17 00:41:45.777586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:41:45.843132 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:41:45.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:45.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:45.843308 systemd[1]: Finished disk-uuid.service.
May 17 00:41:45.866762 systemd[1]: Starting verity-setup.service...
May 17 00:41:45.895485 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:41:45.981481 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:41:45.993303 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:41:46.006001 systemd[1]: Finished verity-setup.service.
May 17 00:41:46.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.108466 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:41:46.109142 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:41:46.109620 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:41:46.166611 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:46.166663 kernel: BTRFS info (device sda6): using free space tree
May 17 00:41:46.166692 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:41:46.166722 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:41:46.110644 systemd[1]: Starting ignition-setup.service...
May 17 00:41:46.149414 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:41:46.181034 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:41:46.200387 systemd[1]: Finished ignition-setup.service.
May 17 00:41:46.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.201978 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:41:46.286277 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:41:46.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.295000 audit: BPF prog-id=9 op=LOAD
May 17 00:41:46.298358 systemd[1]: Starting systemd-networkd.service...
May 17 00:41:46.337013 systemd-networkd[682]: lo: Link UP
May 17 00:41:46.337032 systemd-networkd[682]: lo: Gained carrier
May 17 00:41:46.338646 systemd-networkd[682]: Enumeration completed
May 17 00:41:46.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.338812 systemd[1]: Started systemd-networkd.service.
May 17 00:41:46.339533 systemd-networkd[682]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:41:46.343868 systemd-networkd[682]: eth0: Link UP
May 17 00:41:46.343878 systemd-networkd[682]: eth0: Gained carrier
May 17 00:41:46.353630 systemd-networkd[682]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260.c.flatcar-212911.internal' to 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260'
May 17 00:41:46.353656 systemd-networkd[682]: eth0: DHCPv4 address 10.128.0.56/32, gateway 10.128.0.1 acquired from 169.254.169.254
May 17 00:41:46.357873 systemd[1]: Reached target network.target.
May 17 00:41:46.373854 systemd[1]: Starting iscsiuio.service...
May 17 00:41:46.383548 systemd[1]: Started iscsiuio.service.
May 17 00:41:46.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.466802 systemd[1]: Starting iscsid.service...
May 17 00:41:46.480601 iscsid[693]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:41:46.480601 iscsid[693]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 17 00:41:46.480601 iscsid[693]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 17 00:41:46.480601 iscsid[693]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:41:46.480601 iscsid[693]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:41:46.480601 iscsid[693]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:41:46.480601 iscsid[693]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:41:46.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.497055 ignition[606]: Ignition 2.14.0
May 17 00:41:46.487856 systemd[1]: Started iscsid.service.
May 17 00:41:46.497070 ignition[606]: Stage: fetch-offline
May 17 00:41:46.500808 systemd[1]: Starting dracut-initqueue.service...
May 17 00:41:46.497159 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:46.531136 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:41:46.497208 ignition[606]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:41:46.571089 systemd[1]: Finished dracut-initqueue.service.
May 17 00:41:46.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.525913 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:41:46.580934 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:41:46.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.526122 ignition[606]: parsed url from cmdline: ""
May 17 00:41:46.598768 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:41:46.526129 ignition[606]: no config URL provided
May 17 00:41:46.624738 systemd[1]: Reached target remote-fs.target.
May 17 00:41:46.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.526143 ignition[606]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:41:46.644991 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:41:46.526155 ignition[606]: no config at "/usr/lib/ignition/user.ign"
May 17 00:41:46.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:46.666751 systemd[1]: Starting ignition-fetch.service...
May 17 00:41:46.526165 ignition[606]: failed to fetch config: resource requires networking
May 17 00:41:46.685274 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:41:46.526641 ignition[606]: Ignition finished successfully
May 17 00:41:46.707805 unknown[707]: fetched base config from "system"
May 17 00:41:46.680035 ignition[707]: Ignition 2.14.0
May 17 00:41:46.707819 unknown[707]: fetched base config from "system"
May 17 00:41:46.680048 ignition[707]: Stage: fetch
May 17 00:41:46.707832 unknown[707]: fetched user config from "gcp"
May 17 00:41:46.680191 ignition[707]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:46.710884 systemd[1]: Finished ignition-fetch.service.
May 17 00:41:46.680227 ignition[707]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:41:46.721264 systemd[1]: Starting ignition-kargs.service...
May 17 00:41:46.691034 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:41:46.751097 systemd[1]: Finished ignition-kargs.service.
May 17 00:41:46.691339 ignition[707]: parsed url from cmdline: ""
May 17 00:41:46.759236 systemd[1]: Starting ignition-disks.service...
May 17 00:41:46.691349 ignition[707]: no config URL provided
May 17 00:41:46.782127 systemd[1]: Finished ignition-disks.service.
May 17 00:41:46.691360 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:41:46.802933 systemd[1]: Reached target initrd-root-device.target.
May 17 00:41:46.691376 ignition[707]: no config at "/usr/lib/ignition/user.ign"
May 17 00:41:46.824777 systemd[1]: Reached target local-fs-pre.target.
May 17 00:41:46.691417 ignition[707]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
May 17 00:41:46.834805 systemd[1]: Reached target local-fs.target.
May 17 00:41:46.701007 ignition[707]: GET result: OK
May 17 00:41:46.848848 systemd[1]: Reached target sysinit.target.
May 17 00:41:46.701117 ignition[707]: parsing config with SHA512: 950ccd78ef0372fb26bce21f352bc2cc24c44622aad3ec4a2a7518539f60abc6192b6d9bbd366160a0b4d4dbab6d3a8c24ad9a72710db21cb06f10d986e6bf0b
May 17 00:41:46.870752 systemd[1]: Reached target basic.target.
May 17 00:41:46.708715 ignition[707]: fetch: fetch complete
May 17 00:41:46.885926 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:41:46.708722 ignition[707]: fetch: fetch passed
May 17 00:41:46.708771 ignition[707]: Ignition finished successfully
May 17 00:41:46.735549 ignition[713]: Ignition 2.14.0
May 17 00:41:46.735563 ignition[713]: Stage: kargs
May 17 00:41:46.735765 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:46.735805 ignition[713]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:41:46.744325 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:41:46.746024 ignition[713]: kargs: kargs passed
May 17 00:41:46.746075 ignition[713]: Ignition finished successfully
May 17 00:41:46.770582 ignition[719]: Ignition 2.14.0
May 17 00:41:46.770593 ignition[719]: Stage: disks
May 17 00:41:46.770790 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:46.770838 ignition[719]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:41:46.778526 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:41:46.779864 ignition[719]: disks: disks passed
May 17 00:41:46.779915 ignition[719]: Ignition finished successfully
May 17 00:41:46.931078 systemd-fsck[727]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks
May 17 00:41:47.114658 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:41:47.156641 kernel: kauditd_printk_skb: 22 callbacks suppressed
May 17 00:41:47.156708 kernel: audit: type=1130 audit(1747442507.122:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:47.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:47.151969 systemd[1]: Mounting sysroot.mount...
May 17 00:41:47.179517 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:41:47.181006 systemd[1]: Mounted sysroot.mount.
May 17 00:41:47.181373 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:41:47.202873 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:41:47.210286 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 17 00:41:47.210336 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:41:47.210371 systemd[1]: Reached target ignition-diskful.target.
May 17 00:41:47.284407 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (733)
May 17 00:41:47.222261 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:41:47.311174 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:47.311235 kernel: BTRFS info (device sda6): using free space tree
May 17 00:41:47.311267 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:41:47.255351 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:41:47.319612 initrd-setup-root[738]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:41:47.335585 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:41:47.271048 systemd[1]: Starting initrd-setup-root.service...
May 17 00:41:47.344676 initrd-setup-root[746]: cut: /sysroot/etc/group: No such file or directory
May 17 00:41:47.354598 initrd-setup-root[770]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:41:47.371586 initrd-setup-root[780]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:41:47.365057 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:41:47.415391 systemd[1]: Finished initrd-setup-root.service.
May 17 00:41:47.449638 kernel: audit: type=1130 audit(1747442507.414:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:47.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:47.417235 systemd[1]: Starting ignition-mount.service...
May 17 00:41:47.457758 systemd[1]: Starting sysroot-boot.service...
May 17 00:41:47.472308 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 17 00:41:47.472522 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 17 00:41:47.498649 ignition[799]: INFO : Ignition 2.14.0
May 17 00:41:47.498649 ignition[799]: INFO : Stage: mount
May 17 00:41:47.498649 ignition[799]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:47.498649 ignition[799]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:41:47.632617 kernel: audit: type=1130 audit(1747442507.514:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:47.632677 kernel: audit: type=1130 audit(1747442507.541:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:47.632723 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (808)
May 17 00:41:47.632753 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:47.632783 kernel: BTRFS info (device sda6): using free space tree
May 17 00:41:47.632800 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:41:47.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:47.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:47.503782 systemd[1]: Finished sysroot-boot.service.
May 17 00:41:47.652741 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:41:47.652819 ignition[799]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:41:47.652819 ignition[799]: INFO : mount: mount passed
May 17 00:41:47.652819 ignition[799]: INFO : Ignition finished successfully
May 17 00:41:47.516125 systemd[1]: Finished ignition-mount.service.
May 17 00:41:47.544682 systemd[1]: Starting ignition-files.service...
May 17 00:41:47.702577 ignition[827]: INFO : Ignition 2.14.0
May 17 00:41:47.702577 ignition[827]: INFO : Stage: files
May 17 00:41:47.702577 ignition[827]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:47.702577 ignition[827]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
May 17 00:41:47.702577 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
May 17 00:41:47.702577 ignition[827]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:41:47.702577 ignition[827]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:41:47.702577 ignition[827]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:41:47.584323 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:41:47.804590 ignition[827]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:41:47.804590 ignition[827]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:41:47.804590 ignition[827]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:41:47.804590 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:41:47.804590 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:41:47.804590 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:41:47.804590 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:41:47.648910 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:41:47.911589 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:41:47.706825 unknown[827]: wrote ssh authorized keys file for user: core
May 17 00:41:48.157607 systemd-networkd[682]: eth0: Gained IPv6LL
May 17 00:41:48.435315 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem789816244"
May 17 00:41:48.452591 ignition[827]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem789816244": device or resource busy
May 17 00:41:48.452591 ignition[827]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem789816244", trying btrfs: device or resource busy
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem789816244"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem789816244"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem789816244"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem789816244"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
May 17 00:41:48.452591 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:41:48.452333 systemd[1]: mnt-oem789816244.mount: Deactivated successfully.
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3283384424"
May 17 00:41:48.685594 ignition[827]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3283384424": device or resource busy
May 17 00:41:48.685594 ignition[827]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3283384424", trying btrfs: device or resource busy
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3283384424"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3283384424"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem3283384424"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem3283384424"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:41:48.685594 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:41:48.474042 systemd[1]: mnt-oem3283384424.mount: Deactivated successfully.
May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602535042" May 17 00:41:48.938622 ignition[827]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602535042": device or resource busy May 17 00:41:48.938622 ignition[827]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3602535042", trying btrfs: device or resource busy May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602535042" May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3602535042" May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] 
unmounting "/mnt/oem3602535042" May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem3602535042" May 17 00:41:48.938622 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" May 17 00:41:49.173637 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:41:49.173637 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:41:49.173637 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK May 17 00:41:49.487935 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:41:49.506596 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" May 17 00:41:49.506596 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:41:49.506596 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem598420933" May 17 00:41:49.506596 ignition[827]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem598420933": device or resource busy May 17 00:41:49.506596 ignition[827]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem598420933", trying btrfs: device or resource busy May 17 00:41:49.506596 ignition[827]: INFO : 
files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem598420933" May 17 00:41:49.506596 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem598420933" May 17 00:41:49.506596 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem598420933" May 17 00:41:49.506596 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem598420933" May 17 00:41:49.506596 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" May 17 00:41:49.506596 ignition[827]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service" May 17 00:41:49.506596 ignition[827]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service" May 17 00:41:49.506596 ignition[827]: INFO : files: op(1d): [started] processing unit "oem-gce.service" May 17 00:41:49.506596 ignition[827]: INFO : files: op(1d): [finished] processing unit "oem-gce.service" May 17 00:41:49.506596 ignition[827]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service" May 17 00:41:49.506596 ignition[827]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service" May 17 00:41:49.506596 ignition[827]: INFO : files: op(1f): [started] processing unit "containerd.service" May 17 00:41:49.959744 kernel: audit: type=1130 audit(1747442509.546:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:49.959789 kernel: audit: type=1130 audit(1747442509.633:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.959809 kernel: audit: type=1130 audit(1747442509.678:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.959826 kernel: audit: type=1131 audit(1747442509.678:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.959851 kernel: audit: type=1130 audit(1747442509.809:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.959868 kernel: audit: type=1131 audit(1747442509.831:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:49.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.512819 systemd[1]: mnt-oem598420933.mount: Deactivated successfully. May 17 00:41:49.976758 ignition[827]: INFO : files: op(1f): op(20): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:41:49.976758 ignition[827]: INFO : files: op(1f): op(20): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:41:49.976758 ignition[827]: INFO : files: op(1f): [finished] processing unit "containerd.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(21): [started] processing unit "prepare-helm.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(21): op(22): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(21): op(22): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(21): [finished] processing 
unit "prepare-helm.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:41:49.976758 ignition[827]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:41:49.976758 ignition[827]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(25): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(25): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service" May 17 00:41:49.976758 ignition[827]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:41:49.976758 ignition[827]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:41:49.976758 ignition[827]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:41:49.976758 ignition[827]: INFO : files: files passed May 17 00:41:49.976758 ignition[827]: INFO : Ignition finished successfully May 17 00:41:50.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:50.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.529413 systemd[1]: Finished ignition-files.service. May 17 00:41:50.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.397845 initrd-setup-root-after-ignition[850]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:41:49.561282 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:41:49.584966 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:41:49.586338 systemd[1]: Starting ignition-quench.service... May 17 00:41:50.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.610167 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:41:49.635379 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:41:49.635602 systemd[1]: Finished ignition-quench.service. May 17 00:41:50.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.680079 systemd[1]: Reached target ignition-complete.target. 
May 17 00:41:50.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.516851 ignition[865]: INFO : Ignition 2.14.0 May 17 00:41:50.516851 ignition[865]: INFO : Stage: umount May 17 00:41:50.516851 ignition[865]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:41:50.516851 ignition[865]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 May 17 00:41:50.516851 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" May 17 00:41:50.516851 ignition[865]: INFO : umount: umount passed May 17 00:41:50.516851 ignition[865]: INFO : Ignition finished successfully May 17 00:41:50.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:50.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.744343 systemd[1]: Starting initrd-parse-etc.service... May 17 00:41:49.786042 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:41:49.786185 systemd[1]: Finished initrd-parse-etc.service. May 17 00:41:49.832478 systemd[1]: Reached target initrd-fs.target. May 17 00:41:49.881756 systemd[1]: Reached target initrd.target. May 17 00:41:49.909853 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:41:49.911332 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:41:50.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.927799 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:41:50.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:49.950187 systemd[1]: Starting initrd-cleanup.service... May 17 00:41:49.975445 systemd[1]: Stopped target nss-lookup.target. May 17 00:41:49.984818 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:41:50.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.012904 systemd[1]: Stopped target timers.target. May 17 00:41:50.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:50.039908 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:41:50.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.820000 audit: BPF prog-id=6 op=UNLOAD May 17 00:41:50.040123 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:41:50.059132 systemd[1]: Stopped target initrd.target. May 17 00:41:50.076865 systemd[1]: Stopped target basic.target. May 17 00:41:50.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.100923 systemd[1]: Stopped target ignition-complete.target. May 17 00:41:50.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.126886 systemd[1]: Stopped target ignition-diskful.target. May 17 00:41:50.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.145861 systemd[1]: Stopped target initrd-root-device.target. May 17 00:41:50.167912 systemd[1]: Stopped target remote-fs.target. May 17 00:41:50.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:50.190960 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:41:50.211881 systemd[1]: Stopped target sysinit.target. May 17 00:41:50.232920 systemd[1]: Stopped target local-fs.target. May 17 00:41:50.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.255936 systemd[1]: Stopped target local-fs-pre.target. May 17 00:41:50.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.277947 systemd[1]: Stopped target swap.target. May 17 00:41:51.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.297807 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:41:50.298063 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:41:50.311192 systemd[1]: Stopped target cryptsetup.target. May 17 00:41:51.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.331950 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:41:51.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.332164 systemd[1]: Stopped dracut-initqueue.service. 
May 17 00:41:51.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:51.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:50.353198 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:41:50.353413 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:41:51.116000 audit: BPF prog-id=5 op=UNLOAD May 17 00:41:51.116000 audit: BPF prog-id=4 op=UNLOAD May 17 00:41:51.116000 audit: BPF prog-id=3 op=UNLOAD May 17 00:41:51.117000 audit: BPF prog-id=8 op=UNLOAD May 17 00:41:51.117000 audit: BPF prog-id=7 op=UNLOAD May 17 00:41:50.367190 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:41:50.367385 systemd[1]: Stopped ignition-files.service. May 17 00:41:51.150728 systemd-journald[189]: Received SIGTERM from PID 1 (systemd). May 17 00:41:50.392870 systemd[1]: Stopping ignition-mount.service... May 17 00:41:51.158634 iscsid[693]: iscsid shutting down. May 17 00:41:50.427628 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:41:50.428065 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:41:50.454412 systemd[1]: Stopping sysroot-boot.service... May 17 00:41:50.466824 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:41:50.467269 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:41:50.492934 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:41:50.493158 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:41:50.513235 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:41:50.514570 systemd[1]: ignition-mount.service: Deactivated successfully. 
May 17 00:41:50.514701 systemd[1]: Stopped ignition-mount.service. May 17 00:41:50.525315 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:41:50.525476 systemd[1]: Stopped sysroot-boot.service. May 17 00:41:50.539399 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:41:50.539575 systemd[1]: Stopped ignition-disks.service. May 17 00:41:50.553706 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:41:50.553806 systemd[1]: Stopped ignition-kargs.service. May 17 00:41:50.572751 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:41:50.572842 systemd[1]: Stopped ignition-fetch.service. May 17 00:41:50.600742 systemd[1]: Stopped target network.target. May 17 00:41:50.619612 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:41:50.619747 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:41:50.634742 systemd[1]: Stopped target paths.target. May 17 00:41:50.648594 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:41:50.652528 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:41:50.664594 systemd[1]: Stopped target slices.target. May 17 00:41:50.678601 systemd[1]: Stopped target sockets.target. May 17 00:41:50.691693 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:41:50.691772 systemd[1]: Closed iscsid.socket. May 17 00:41:50.705684 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:41:50.705756 systemd[1]: Closed iscsiuio.socket. May 17 00:41:50.719667 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:41:50.719814 systemd[1]: Stopped ignition-setup.service. May 17 00:41:50.738697 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:41:50.738821 systemd[1]: Stopped initrd-setup-root.service. May 17 00:41:50.754930 systemd[1]: Stopping systemd-networkd.service... 
May 17 00:41:50.758500 systemd-networkd[682]: eth0: DHCPv6 lease lost May 17 00:41:51.166000 audit: BPF prog-id=9 op=UNLOAD May 17 00:41:50.769916 systemd[1]: Stopping systemd-resolved.service... May 17 00:41:50.777539 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:41:50.777677 systemd[1]: Stopped systemd-resolved.service. May 17 00:41:50.790612 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:41:50.790808 systemd[1]: Stopped systemd-networkd.service. May 17 00:41:50.813695 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:41:50.813831 systemd[1]: Finished initrd-cleanup.service. May 17 00:41:50.822076 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:41:50.822263 systemd[1]: Closed systemd-networkd.socket. May 17 00:41:50.844816 systemd[1]: Stopping network-cleanup.service... May 17 00:41:50.850802 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:41:50.850917 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:41:50.871938 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:41:50.872016 systemd[1]: Stopped systemd-sysctl.service. May 17 00:41:50.886978 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:41:50.887052 systemd[1]: Stopped systemd-modules-load.service. May 17 00:41:50.904051 systemd[1]: Stopping systemd-udevd.service... May 17 00:41:50.920511 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:41:50.921221 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:41:50.921393 systemd[1]: Stopped systemd-udevd.service. May 17 00:41:50.929491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:41:50.929592 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:41:50.948681 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
May 17 00:41:50.948755 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:41:50.964835 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:41:50.964938 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:41:50.979986 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:41:50.980087 systemd[1]: Stopped dracut-cmdline.service. May 17 00:41:50.994905 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:41:50.994979 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:41:51.010970 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:41:51.033574 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:41:51.033722 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:41:51.041494 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:41:51.041639 systemd[1]: Stopped network-cleanup.service. May 17 00:41:51.063147 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:41:51.063288 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:41:51.078937 systemd[1]: Reached target initrd-switch-root.target. May 17 00:41:51.095001 systemd[1]: Starting initrd-switch-root.service... May 17 00:41:51.114172 systemd[1]: Switching root. May 17 00:41:51.170187 systemd-journald[189]: Journal stopped May 17 00:41:56.086517 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:41:56.086666 kernel: SELinux: Class anon_inode not defined in policy. 
May 17 00:41:56.086726 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:41:56.086757 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:41:56.086791 kernel: SELinux: policy capability open_perms=1 May 17 00:41:56.086830 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:41:56.086861 kernel: SELinux: policy capability always_check_network=0 May 17 00:41:56.086898 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:41:56.086936 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:41:56.086994 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:41:56.087031 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:41:56.087063 systemd[1]: Successfully loaded SELinux policy in 185.457ms. May 17 00:41:56.087122 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.400ms. May 17 00:41:56.087157 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:41:56.087196 systemd[1]: Detected virtualization kvm. May 17 00:41:56.087236 systemd[1]: Detected architecture x86-64. May 17 00:41:56.087267 systemd[1]: Detected first boot. May 17 00:41:56.087301 systemd[1]: Initializing machine ID from VM UUID. May 17 00:41:56.087337 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:41:56.087370 systemd[1]: Populated /etc with preset unit settings. May 17 00:41:56.087402 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 17 00:41:56.087461 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:56.087496 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:41:56.087537 systemd[1]: Queued start job for default target multi-user.target. May 17 00:41:56.087571 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:41:56.087610 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:41:56.087651 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:41:56.087693 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 17 00:41:56.087725 systemd[1]: Created slice system-getty.slice. May 17 00:41:56.087757 systemd[1]: Created slice system-modprobe.slice. May 17 00:41:56.087788 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:41:56.087822 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:41:56.087855 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:41:56.087892 systemd[1]: Created slice user.slice. May 17 00:41:56.087935 systemd[1]: Started systemd-ask-password-console.path. May 17 00:41:56.087968 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:41:56.087998 systemd[1]: Set up automount boot.automount. May 17 00:41:56.088031 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:41:56.088062 systemd[1]: Reached target integritysetup.target. May 17 00:41:56.088095 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:41:56.088135 systemd[1]: Reached target remote-fs.target. May 17 00:41:56.088173 systemd[1]: Reached target slices.target. May 17 00:41:56.088206 systemd[1]: Reached target swap.target. May 17 00:41:56.088237 systemd[1]: Reached target torcx.target. 
May 17 00:41:56.088271 systemd[1]: Reached target veritysetup.target. May 17 00:41:56.088302 systemd[1]: Listening on systemd-coredump.socket. May 17 00:41:56.088338 systemd[1]: Listening on systemd-initctl.socket. May 17 00:41:56.088401 kernel: kauditd_printk_skb: 50 callbacks suppressed May 17 00:41:56.088446 kernel: audit: type=1400 audit(1747442515.602:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:41:56.088478 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:41:56.088516 kernel: audit: type=1335 audit(1747442515.602:87): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:41:56.088548 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:41:56.088579 systemd[1]: Listening on systemd-journald.socket. May 17 00:41:56.088612 systemd[1]: Listening on systemd-networkd.socket. May 17 00:41:56.088645 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:41:56.088692 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:41:56.088731 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:41:56.088763 systemd[1]: Mounting dev-hugepages.mount... May 17 00:41:56.088795 systemd[1]: Mounting dev-mqueue.mount... May 17 00:41:56.088832 systemd[1]: Mounting media.mount... May 17 00:41:56.088866 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:56.088900 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:41:56.088932 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:41:56.088962 systemd[1]: Mounting tmp.mount... May 17 00:41:56.088995 systemd[1]: Starting flatcar-tmpfiles.service... 
May 17 00:41:56.089029 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:56.089060 systemd[1]: Starting kmod-static-nodes.service... May 17 00:41:56.089095 systemd[1]: Starting modprobe@configfs.service... May 17 00:41:56.089131 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:56.089163 systemd[1]: Starting modprobe@drm.service... May 17 00:41:56.089194 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:56.089227 systemd[1]: Starting modprobe@fuse.service... May 17 00:41:56.089257 systemd[1]: Starting modprobe@loop.service... May 17 00:41:56.089290 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:41:56.089327 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:41:56.089357 kernel: fuse: init (API version 7.34) May 17 00:41:56.089388 kernel: loop: module loaded May 17 00:41:56.089447 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 17 00:41:56.089488 systemd[1]: Starting systemd-journald.service... May 17 00:41:56.089522 systemd[1]: Starting systemd-modules-load.service... May 17 00:41:56.089555 kernel: audit: type=1305 audit(1747442516.080:88): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:41:56.089590 systemd-journald[1029]: Journal started May 17 00:41:56.089721 systemd-journald[1029]: Runtime Journal (/run/log/journal/fec82ec59c58d0c7e7cbdabe7b2e0c78) is 8.0M, max 148.8M, 140.8M free. 
May 17 00:41:55.602000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:41:55.602000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:41:56.080000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:41:56.136247 kernel: audit: type=1300 audit(1747442516.080:88): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff4d3bcb40 a2=4000 a3=7fff4d3bcbdc items=0 ppid=1 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:56.136368 systemd[1]: Starting systemd-network-generator.service... May 17 00:41:56.136448 kernel: audit: type=1327 audit(1747442516.080:88): proctitle="/usr/lib/systemd/systemd-journald" May 17 00:41:56.080000 audit[1029]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff4d3bcb40 a2=4000 a3=7fff4d3bcbdc items=0 ppid=1 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:56.080000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:41:56.162452 systemd[1]: Starting systemd-remount-fs.service... May 17 00:41:56.179481 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:41:56.197467 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:56.208547 systemd[1]: Started systemd-journald.service. 
May 17 00:41:56.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.218338 systemd[1]: Mounted dev-hugepages.mount. May 17 00:41:56.238482 kernel: audit: type=1130 audit(1747442516.214:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.244786 systemd[1]: Mounted dev-mqueue.mount. May 17 00:41:56.251784 systemd[1]: Mounted media.mount. May 17 00:41:56.258758 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:41:56.268777 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:41:56.277722 systemd[1]: Mounted tmp.mount. May 17 00:41:56.284922 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:41:56.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.294222 systemd[1]: Finished kmod-static-nodes.service. May 17 00:41:56.316480 kernel: audit: type=1130 audit(1747442516.292:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.325263 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:41:56.325627 systemd[1]: Finished modprobe@configfs.service. 
May 17 00:41:56.347496 kernel: audit: type=1130 audit(1747442516.323:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.356309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:56.356669 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:56.400180 kernel: audit: type=1130 audit(1747442516.354:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.400322 kernel: audit: type=1131 audit(1747442516.354:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.409201 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 17 00:41:56.409537 systemd[1]: Finished modprobe@drm.service. May 17 00:41:56.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.418143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:56.418490 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:56.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.427106 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:41:56.427466 systemd[1]: Finished modprobe@fuse.service. May 17 00:41:56.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.436062 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:56.436488 systemd[1]: Finished modprobe@loop.service. 
May 17 00:41:56.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.445183 systemd[1]: Finished systemd-modules-load.service. May 17 00:41:56.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.454076 systemd[1]: Finished systemd-network-generator.service. May 17 00:41:56.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.463070 systemd[1]: Finished systemd-remount-fs.service. May 17 00:41:56.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.472029 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:41:56.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.481206 systemd[1]: Reached target network-pre.target. May 17 00:41:56.491467 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:41:56.502768 systemd[1]: Mounting sys-kernel-config.mount... 
May 17 00:41:56.510603 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:41:56.514044 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:41:56.523666 systemd[1]: Starting systemd-journal-flush.service... May 17 00:41:56.533038 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:56.535074 systemd[1]: Starting systemd-random-seed.service... May 17 00:41:56.542966 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:56.545069 systemd[1]: Starting systemd-sysctl.service... May 17 00:41:56.549057 systemd-journald[1029]: Time spent on flushing to /var/log/journal/fec82ec59c58d0c7e7cbdabe7b2e0c78 is 75.255ms for 1089 entries. May 17 00:41:56.549057 systemd-journald[1029]: System Journal (/var/log/journal/fec82ec59c58d0c7e7cbdabe7b2e0c78) is 8.0M, max 584.8M, 576.8M free. May 17 00:41:56.659365 systemd-journald[1029]: Received client request to flush runtime journal. May 17 00:41:56.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.562591 systemd[1]: Starting systemd-sysusers.service... May 17 00:41:56.571558 systemd[1]: Starting systemd-udev-settle.service... May 17 00:41:56.582887 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:41:56.661547 udevadm[1052]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
May 17 00:41:56.591700 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:41:56.601041 systemd[1]: Finished systemd-random-seed.service. May 17 00:41:56.610103 systemd[1]: Finished systemd-sysctl.service. May 17 00:41:56.622151 systemd[1]: Reached target first-boot-complete.target. May 17 00:41:56.661017 systemd[1]: Finished systemd-journal-flush.service. May 17 00:41:56.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.670292 systemd[1]: Finished systemd-sysusers.service. May 17 00:41:56.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:56.681611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:41:56.741968 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:41:56.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:57.315920 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:41:57.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:57.326618 systemd[1]: Starting systemd-udevd.service... May 17 00:41:57.353131 systemd-udevd[1062]: Using default interface naming scheme 'v252'. May 17 00:41:57.404137 systemd[1]: Started systemd-udevd.service. 
May 17 00:41:57.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:57.416435 systemd[1]: Starting systemd-networkd.service... May 17 00:41:57.432819 systemd[1]: Starting systemd-userdbd.service... May 17 00:41:57.499690 systemd[1]: Found device dev-ttyS0.device. May 17 00:41:57.523915 systemd[1]: Started systemd-userdbd.service. May 17 00:41:57.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:57.699682 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:41:57.725122 systemd-networkd[1075]: lo: Link UP May 17 00:41:57.725708 systemd-networkd[1075]: lo: Gained carrier May 17 00:41:57.726790 systemd-networkd[1075]: Enumeration completed May 17 00:41:57.727128 systemd-networkd[1075]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:41:57.727131 systemd[1]: Started systemd-networkd.service. May 17 00:41:57.730225 systemd-networkd[1075]: eth0: Link UP May 17 00:41:57.730374 systemd-networkd[1075]: eth0: Gained carrier May 17 00:41:57.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:57.743661 systemd-networkd[1075]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260.c.flatcar-212911.internal' to 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:41:57.743690 systemd-networkd[1075]: eth0: DHCPv4 address 10.128.0.56/32, gateway 10.128.0.1 acquired from 169.254.169.254 May 17 00:41:57.763267 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:41:57.782660 kernel: ACPI: button: Power Button [PWRF] May 17 00:41:57.782773 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 May 17 00:41:57.789483 kernel: ACPI: button: Sleep Button [SLPF] May 17 00:41:57.800000 audit[1087]: AVC avc: denied { confidentiality } for pid=1087 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:41:57.800000 audit[1087]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563b24ec4b10 a1=338ac a2=7facfd63bbc5 a3=5 items=110 ppid=1062 pid=1087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:57.800000 audit: CWD cwd="/" May 17 00:41:57.800000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=1 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=2 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:41:57.800000 audit: PATH item=3 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=4 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=5 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=6 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=7 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=8 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=9 name=(null) inode=14660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=10 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=11 name=(null) inode=14661 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=12 name=(null) 
inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=13 name=(null) inode=14662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=14 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=15 name=(null) inode=14663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=16 name=(null) inode=14659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=17 name=(null) inode=14664 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=18 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=19 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=20 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=21 name=(null) inode=14666 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=22 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=23 name=(null) inode=14667 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=24 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=25 name=(null) inode=14668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=26 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=27 name=(null) inode=14669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=28 name=(null) inode=14665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=29 name=(null) inode=14670 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=30 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=31 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=32 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=33 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=34 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=35 name=(null) inode=14673 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=36 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=37 name=(null) inode=14674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=38 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=39 name=(null) inode=14675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=40 name=(null) inode=14671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=41 name=(null) inode=14676 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=42 name=(null) inode=14656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=43 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=44 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=45 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=46 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=47 name=(null) inode=14679 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=48 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=49 name=(null) inode=14680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=50 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=51 name=(null) inode=14681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=52 name=(null) inode=14677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=53 name=(null) inode=14682 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=55 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=56 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=57 name=(null) inode=14684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:41:57.800000 audit: PATH item=58 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=59 name=(null) inode=14685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=60 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=61 name=(null) inode=14686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=62 name=(null) inode=14686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=63 name=(null) inode=14687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=64 name=(null) inode=14686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=65 name=(null) inode=14688 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=66 name=(null) inode=14686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=67 
name=(null) inode=14689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=68 name=(null) inode=14686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=69 name=(null) inode=14690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=70 name=(null) inode=14686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=71 name=(null) inode=14691 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=72 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=73 name=(null) inode=14692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=74 name=(null) inode=14692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=75 name=(null) inode=14693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=76 name=(null) inode=14692 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=77 name=(null) inode=14694 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=78 name=(null) inode=14692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=79 name=(null) inode=14695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=80 name=(null) inode=14692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=81 name=(null) inode=14696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=82 name=(null) inode=14692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=83 name=(null) inode=14697 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=84 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=85 name=(null) inode=14698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=86 name=(null) inode=14698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=87 name=(null) inode=14699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=88 name=(null) inode=14698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=89 name=(null) inode=14700 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=90 name=(null) inode=14698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=91 name=(null) inode=14701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=92 name=(null) inode=14698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=93 name=(null) inode=14702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=94 name=(null) inode=14698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=95 name=(null) inode=14703 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=96 name=(null) inode=14683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=97 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=98 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=99 name=(null) inode=14705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=100 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=101 name=(null) inode=14706 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=102 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=103 name=(null) inode=14707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=104 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=105 name=(null) inode=14708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=106 name=(null) inode=14704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=107 name=(null) inode=14709 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PATH item=109 name=(null) inode=14710 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:41:57.800000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:41:57.867449 kernel: EDAC MC: Ver: 3.0.0 May 17 00:41:57.882470 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 17 00:41:57.899456 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr May 17 00:41:57.905825 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:41:57.932385 systemd[1]: Finished systemd-udev-settle.service. 
May 17 00:41:57.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:57.942606 systemd[1]: Starting lvm2-activation-early.service... May 17 00:41:57.972810 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:41:58.007109 systemd[1]: Finished lvm2-activation-early.service. May 17 00:41:58.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.015940 systemd[1]: Reached target cryptsetup.target. May 17 00:41:58.026309 systemd[1]: Starting lvm2-activation.service... May 17 00:41:58.033137 lvm[1102]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:41:58.065186 systemd[1]: Finished lvm2-activation.service. May 17 00:41:58.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.073964 systemd[1]: Reached target local-fs-pre.target. May 17 00:41:58.082601 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:41:58.082665 systemd[1]: Reached target local-fs.target. May 17 00:41:58.091604 systemd[1]: Reached target machines.target. May 17 00:41:58.102403 systemd[1]: Starting ldconfig.service... May 17 00:41:58.110687 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:41:58.110797 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:58.112850 systemd[1]: Starting systemd-boot-update.service... May 17 00:41:58.121751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:41:58.133975 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:41:58.137129 systemd[1]: Starting systemd-sysext.service... May 17 00:41:58.146539 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1105 (bootctl) May 17 00:41:58.148763 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:41:58.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.158788 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:41:58.175167 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:41:58.183653 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:41:58.184123 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:41:58.218476 kernel: loop0: detected capacity change from 0 to 221472 May 17 00:41:58.324094 systemd-fsck[1117]: fsck.fat 4.2 (2021-01-31) May 17 00:41:58.324094 systemd-fsck[1117]: /dev/sda1: 790 files, 120726/258078 clusters May 17 00:41:58.327192 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:41:58.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.340128 systemd[1]: Mounting boot.mount... 
May 17 00:41:58.374645 systemd[1]: Mounted boot.mount. May 17 00:41:58.406101 systemd[1]: Finished systemd-boot-update.service. May 17 00:41:58.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.604872 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:41:58.607372 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:41:58.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.628293 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:41:58.669837 kernel: loop1: detected capacity change from 0 to 221472 May 17 00:41:58.696364 (sd-sysext)[1129]: Using extensions 'kubernetes'. May 17 00:41:58.697162 (sd-sysext)[1129]: Merged extensions into '/usr'. May 17 00:41:58.730327 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:58.733178 systemd[1]: Mounting usr-share-oem.mount... May 17 00:41:58.741733 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:58.745123 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:58.754453 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:58.764156 systemd[1]: Starting modprobe@loop.service... May 17 00:41:58.771675 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:58.771949 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 00:41:58.772195 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:58.779389 systemd[1]: Mounted usr-share-oem.mount. May 17 00:41:58.787217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:58.787547 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:58.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.797752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:58.798055 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:58.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.808623 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:58.809029 systemd[1]: Finished modprobe@loop.service. May 17 00:41:58.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:58.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.819675 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:58.819836 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:58.822020 systemd[1]: Finished systemd-sysext.service. May 17 00:41:58.826492 ldconfig[1104]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:41:58.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.833131 systemd[1]: Starting ensure-sysext.service... May 17 00:41:58.842618 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:41:58.851807 systemd[1]: Finished ldconfig.service. May 17 00:41:58.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:58.861955 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:41:58.862330 systemd[1]: Reloading. May 17 00:41:58.864745 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:41:58.869261 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 17 00:41:58.973890 /usr/lib/systemd/system-generators/torcx-generator[1163]: time="2025-05-17T00:41:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:41:58.974642 /usr/lib/systemd/system-generators/torcx-generator[1163]: time="2025-05-17T00:41:58Z" level=info msg="torcx already run" May 17 00:41:59.184906 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:41:59.184937 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:59.210377 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:41:59.293638 systemd-networkd[1075]: eth0: Gained IPv6LL May 17 00:41:59.309279 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:41:59.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.323224 systemd[1]: Starting audit-rules.service... May 17 00:41:59.333374 systemd[1]: Starting clean-ca-certificates.service... May 17 00:41:59.344813 systemd[1]: Starting oem-gce-enable-oslogin.service... May 17 00:41:59.356915 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:41:59.369308 systemd[1]: Starting systemd-resolved.service... May 17 00:41:59.380203 systemd[1]: Starting systemd-timesyncd.service... 
May 17 00:41:59.390942 systemd[1]: Starting systemd-update-utmp.service... May 17 00:41:59.400946 systemd[1]: Finished clean-ca-certificates.service. May 17 00:41:59.402000 audit[1238]: SYSTEM_BOOT pid=1238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:41:59.410712 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 17 00:41:59.411209 systemd[1]: Finished oem-gce-enable-oslogin.service. May 17 00:41:59.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.429968 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:59.430691 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:59.435514 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:59.445059 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:59.455244 systemd[1]: Starting modprobe@loop.service... May 17 00:41:59.465543 systemd[1]: Starting oem-gce-enable-oslogin.service... May 17 00:41:59.473659 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:41:59.474056 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:59.474392 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:59.474678 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:59.479133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:59.479494 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:59.482192 enable-oslogin[1254]: /etc/pam.d/sshd already exists. Not enabling OS Login May 17 00:41:59.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:59.488979 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:41:59.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:59.495000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:41:59.495000 audit[1251]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcab0a10c0 a2=420 a3=0 items=0 ppid=1215 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:59.495000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:41:59.498106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:59.499174 augenrules[1251]: No rules May 17 00:41:59.498481 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:59.507960 systemd[1]: Finished audit-rules.service. May 17 00:41:59.516729 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:59.517064 systemd[1]: Finished modprobe@loop.service. May 17 00:41:59.526673 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 17 00:41:59.527114 systemd[1]: Finished oem-gce-enable-oslogin.service. May 17 00:41:59.536818 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:59.537214 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:59.540484 systemd[1]: Starting systemd-update-done.service... May 17 00:41:59.548782 systemd[1]: Finished systemd-update-utmp.service. May 17 00:41:59.562961 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:59.563629 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:59.568319 systemd[1]: Starting modprobe@dm_mod.service... 
May 17 00:41:59.578065 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:59.588047 systemd[1]: Starting modprobe@loop.service... May 17 00:41:59.598166 systemd[1]: Starting oem-gce-enable-oslogin.service... May 17 00:41:59.606160 enable-oslogin[1272]: /etc/pam.d/sshd already exists. Not enabling OS Login May 17 00:41:59.607649 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:59.607963 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:59.608270 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:59.608542 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:59.611249 systemd[1]: Finished systemd-update-done.service. May 17 00:41:59.620493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:59.620824 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:59.630439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:59.630775 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:59.640401 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:59.640743 systemd[1]: Finished modprobe@loop.service. May 17 00:41:59.650528 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 17 00:41:59.650973 systemd[1]: Finished oem-gce-enable-oslogin.service. May 17 00:41:59.661286 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:59.661539 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 17 00:41:59.669443 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:59.670277 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:59.674815 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:59.683873 systemd[1]: Starting modprobe@drm.service... May 17 00:41:59.693052 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:59.703179 systemd[1]: Starting modprobe@loop.service... May 17 00:41:59.709101 systemd-resolved[1229]: Positive Trust Anchors: May 17 00:41:59.709124 systemd-resolved[1229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:41:59.709207 systemd-resolved[1229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:41:59.713279 systemd[1]: Starting oem-gce-enable-oslogin.service... May 17 00:41:59.719821 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:59.720124 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:59.722929 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:41:59.723672 systemd-timesyncd[1233]: Contacted time server 169.254.169.254:123 (169.254.169.254). May 17 00:41:59.723748 systemd-timesyncd[1233]: Initial clock synchronization to Sat 2025-05-17 00:41:59.751918 UTC. 
May 17 00:41:59.730989 enable-oslogin[1283]: /etc/pam.d/sshd already exists. Not enabling OS Login May 17 00:41:59.731642 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:59.731898 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:59.735287 systemd[1]: Started systemd-timesyncd.service. May 17 00:41:59.747572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:59.747902 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:59.756326 systemd-resolved[1229]: Defaulting to hostname 'linux'. May 17 00:41:59.758681 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:41:59.759022 systemd[1]: Finished modprobe@drm.service. May 17 00:41:59.769011 systemd[1]: Started systemd-resolved.service. May 17 00:41:59.778304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:59.778654 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:59.788253 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:59.788569 systemd[1]: Finished modprobe@loop.service. May 17 00:41:59.797229 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. May 17 00:41:59.797635 systemd[1]: Finished oem-gce-enable-oslogin.service. May 17 00:41:59.807292 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:41:59.818553 systemd[1]: Reached target network.target. May 17 00:41:59.826689 systemd[1]: Reached target network-online.target. May 17 00:41:59.835571 systemd[1]: Reached target nss-lookup.target. May 17 00:41:59.843576 systemd[1]: Reached target time-set.target. May 17 00:41:59.851632 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 17 00:41:59.851710 systemd[1]: Reached target sysinit.target. May 17 00:41:59.860694 systemd[1]: Started motdgen.path. May 17 00:41:59.867647 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:41:59.877802 systemd[1]: Started logrotate.timer. May 17 00:41:59.884735 systemd[1]: Started mdadm.timer. May 17 00:41:59.891600 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:41:59.899567 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:41:59.899643 systemd[1]: Reached target paths.target. May 17 00:41:59.906571 systemd[1]: Reached target timers.target. May 17 00:41:59.914031 systemd[1]: Listening on dbus.socket. May 17 00:41:59.923253 systemd[1]: Starting docker.socket... May 17 00:41:59.933868 systemd[1]: Listening on sshd.socket. May 17 00:41:59.940675 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:59.940784 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:59.941783 systemd[1]: Finished ensure-sysext.service. May 17 00:41:59.950774 systemd[1]: Listening on docker.socket. May 17 00:41:59.958623 systemd[1]: Reached target sockets.target. May 17 00:41:59.966564 systemd[1]: Reached target basic.target. May 17 00:41:59.973848 systemd[1]: System is tainted: cgroupsv1 May 17 00:41:59.973938 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:41:59.973987 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:41:59.975768 systemd[1]: Starting containerd.service... May 17 00:41:59.984411 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 17 00:41:59.994900 systemd[1]: Starting dbus.service... 
May 17 00:42:00.002566 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:42:00.011808 systemd[1]: Starting extend-filesystems.service... May 17 00:42:00.018603 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:42:00.021361 systemd[1]: Starting kubelet.service... May 17 00:42:00.029000 jq[1296]: false May 17 00:42:00.031743 systemd[1]: Starting motdgen.service... May 17 00:42:00.040773 systemd[1]: Starting oem-gce.service... May 17 00:42:00.051253 systemd[1]: Starting prepare-helm.service... May 17 00:42:00.060776 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:42:00.069987 systemd[1]: Starting sshd-keygen.service... May 17 00:42:00.081768 systemd[1]: Starting systemd-logind.service... May 17 00:42:00.088594 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:42:00.088743 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). May 17 00:42:00.091281 systemd[1]: Starting update-engine.service... May 17 00:42:00.115193 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:42:00.129988 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:42:00.130518 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:42:00.140186 jq[1324]: true May 17 00:42:00.144245 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:42:00.146787 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:42:00.196097 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:42:00.196614 systemd[1]: Finished motdgen.service. 
May 17 00:42:00.214161 mkfs.ext4[1336]: mke2fs 1.46.5 (30-Dec-2021) May 17 00:42:00.221371 extend-filesystems[1297]: Found loop1 May 17 00:42:00.228950 jq[1334]: true May 17 00:42:00.236464 mkfs.ext4[1336]: Discarding device blocks: done May 17 00:42:00.236464 mkfs.ext4[1336]: Creating filesystem with 262144 4k blocks and 65536 inodes May 17 00:42:00.236464 mkfs.ext4[1336]: Filesystem UUID: 213fe43f-b52e-4224-ac7a-77035ba62050 May 17 00:42:00.236464 mkfs.ext4[1336]: Superblock backups stored on blocks: May 17 00:42:00.236464 mkfs.ext4[1336]: 32768, 98304, 163840, 229376 May 17 00:42:00.236464 mkfs.ext4[1336]: Allocating group tables: done May 17 00:42:00.236464 mkfs.ext4[1336]: Writing inode tables: done May 17 00:42:00.243131 extend-filesystems[1297]: Found sda May 17 00:42:00.243131 extend-filesystems[1297]: Found sda1 May 17 00:42:00.243131 extend-filesystems[1297]: Found sda2 May 17 00:42:00.243131 extend-filesystems[1297]: Found sda3 May 17 00:42:00.243131 extend-filesystems[1297]: Found usr May 17 00:42:00.243131 extend-filesystems[1297]: Found sda4 May 17 00:42:00.243131 extend-filesystems[1297]: Found sda6 May 17 00:42:00.243131 extend-filesystems[1297]: Found sda7 May 17 00:42:00.243131 extend-filesystems[1297]: Found sda9 May 17 00:42:00.243131 extend-filesystems[1297]: Checking size of /dev/sda9 May 17 00:42:00.321606 extend-filesystems[1297]: Resized partition /dev/sda9 May 17 00:42:00.339536 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks May 17 00:42:00.339652 mkfs.ext4[1336]: Creating journal (8192 blocks): done May 17 00:42:00.339652 mkfs.ext4[1336]: Writing superblocks and filesystem accounting information: done May 17 00:42:00.339783 
tar[1330]: linux-amd64/helm May 17 00:42:00.340191 update_engine[1319]: I0517 00:42:00.334836 1319 main.cc:92] Flatcar Update Engine starting May 17 00:42:00.340617 extend-filesystems[1355]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:42:00.361812 umount[1368]: umount: /var/lib/flatcar-oem-gce.img: not mounted. May 17 00:42:00.397264 dbus-daemon[1295]: [system] SELinux support is enabled May 17 00:42:00.398171 systemd[1]: Started dbus.service. May 17 00:42:00.403885 dbus-daemon[1295]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1075 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 17 00:42:00.418944 kernel: loop2: detected capacity change from 0 to 2097152 May 17 00:42:00.410226 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:42:00.410297 systemd[1]: Reached target system-config.target. May 17 00:42:00.422712 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:42:00.422769 systemd[1]: Reached target user-config.target. May 17 00:42:00.455079 kernel: EXT4-fs (sda9): resized filesystem to 2538491 May 17 00:42:00.455162 update_engine[1319]: I0517 00:42:00.423504 1319 update_check_scheduler.cc:74] Next update check in 2m36s May 17 00:42:00.440879 dbus-daemon[1295]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:42:00.439624 systemd[1]: Started update-engine.service. May 17 00:42:00.453836 systemd[1]: Started locksmithd.service. May 17 00:42:00.459298 bash[1369]: Updated "/home/core/.ssh/authorized_keys" May 17 00:42:00.498125 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
May 17 00:42:00.498401 extend-filesystems[1355]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 17 00:42:00.498401 extend-filesystems[1355]: old_desc_blocks = 1, new_desc_blocks = 2 May 17 00:42:00.498401 extend-filesystems[1355]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. May 17 00:42:00.463944 systemd[1]: Starting systemd-hostnamed.service... May 17 00:42:00.538921 extend-filesystems[1297]: Resized filesystem in /dev/sda9 May 17 00:42:00.472407 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:42:00.492577 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:42:00.493032 systemd[1]: Finished extend-filesystems.service. May 17 00:42:00.616256 env[1335]: time="2025-05-17T00:42:00.616183654Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:42:00.762524 systemd-logind[1312]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:42:00.798550 systemd-logind[1312]: Watching system buttons on /dev/input/event2 (Sleep Button) May 17 00:42:00.798845 systemd-logind[1312]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:42:00.804863 systemd-logind[1312]: New seat seat0. May 17 00:42:00.817985 systemd[1]: Started systemd-logind.service. May 17 00:42:00.882369 coreos-metadata[1294]: May 17 00:42:00.882 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 May 17 00:42:00.897544 env[1335]: time="2025-05-17T00:42:00.897489282Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 17 00:42:00.899495 coreos-metadata[1294]: May 17 00:42:00.899 INFO Fetch failed with 404: resource not found May 17 00:42:00.899630 coreos-metadata[1294]: May 17 00:42:00.899 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 May 17 00:42:00.899964 env[1335]: time="2025-05-17T00:42:00.899928364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:42:00.900247 coreos-metadata[1294]: May 17 00:42:00.900 INFO Fetch successful May 17 00:42:00.900325 coreos-metadata[1294]: May 17 00:42:00.900 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 May 17 00:42:00.901063 coreos-metadata[1294]: May 17 00:42:00.900 INFO Fetch failed with 404: resource not found May 17 00:42:00.901341 coreos-metadata[1294]: May 17 00:42:00.901 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 May 17 00:42:00.903028 env[1335]: time="2025-05-17T00:42:00.902983195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:42:00.908134 env[1335]: time="2025-05-17T00:42:00.908090156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:42:00.908684 env[1335]: time="2025-05-17T00:42:00.908647778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:42:00.908889 env[1335]: time="2025-05-17T00:42:00.908860491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:42:00.909020 env[1335]: time="2025-05-17T00:42:00.908993844Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:42:00.909150 env[1335]: time="2025-05-17T00:42:00.909124398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:42:00.909865 coreos-metadata[1294]: May 17 00:42:00.909 INFO Fetch failed with 404: resource not found May 17 00:42:00.910109 coreos-metadata[1294]: May 17 00:42:00.909 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 May 17 00:42:00.910776 env[1335]: time="2025-05-17T00:42:00.910741052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:42:00.911295 env[1335]: time="2025-05-17T00:42:00.911265934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:42:00.912276 coreos-metadata[1294]: May 17 00:42:00.912 INFO Fetch successful May 17 00:42:00.913222 env[1335]: time="2025-05-17T00:42:00.913182781Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:42:00.916522 env[1335]: time="2025-05-17T00:42:00.916464814Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 00:42:00.916580 unknown[1294]: wrote ssh authorized keys file for user: core May 17 00:42:00.917734 env[1335]: time="2025-05-17T00:42:00.917691093Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:42:00.917855 env[1335]: time="2025-05-17T00:42:00.917735954Z" level=info msg="metadata content store policy set" policy=shared May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925407528Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925486160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925512021Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925579334Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925612181Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925718845Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925743998Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925771529Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925799164Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925826860Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925853453Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.925879632Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.926047282Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:42:00.926376 env[1335]: time="2025-05-17T00:42:00.926174678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.926827768Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.926877192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.926905237Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.926970768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.926995467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.927022001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.927045162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.927068528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.927093128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.927113713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927155 env[1335]: time="2025-05-17T00:42:00.927135256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927816 env[1335]: time="2025-05-17T00:42:00.927159260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:42:00.927816 env[1335]: time="2025-05-17T00:42:00.927354253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927816 env[1335]: time="2025-05-17T00:42:00.927380509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:42:00.927816 env[1335]: time="2025-05-17T00:42:00.927408078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:42:00.929514 env[1335]: time="2025-05-17T00:42:00.929475684Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 17 00:42:00.929618 env[1335]: time="2025-05-17T00:42:00.929528819Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:42:00.929618 env[1335]: time="2025-05-17T00:42:00.929553457Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:42:00.929618 env[1335]: time="2025-05-17T00:42:00.929584328Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:42:00.929780 env[1335]: time="2025-05-17T00:42:00.929637358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:42:00.930092 env[1335]: time="2025-05-17T00:42:00.929993441Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:42:00.933448 env[1335]: time="2025-05-17T00:42:00.930108650Z" level=info msg="Connect containerd service" May 17 00:42:00.933448 env[1335]: time="2025-05-17T00:42:00.930168881Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:42:00.933448 env[1335]: time="2025-05-17T00:42:00.930948826Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:42:00.933448 env[1335]: time="2025-05-17T00:42:00.931317152Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:42:00.933448 env[1335]: time="2025-05-17T00:42:00.931385818Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:42:00.938804 systemd[1]: Started containerd.service. 
May 17 00:42:00.942835 env[1335]: time="2025-05-17T00:42:00.938964484Z" level=info msg="containerd successfully booted in 0.381939s" May 17 00:42:00.942835 env[1335]: time="2025-05-17T00:42:00.939577220Z" level=info msg="Start subscribing containerd event" May 17 00:42:00.942835 env[1335]: time="2025-05-17T00:42:00.939862547Z" level=info msg="Start recovering state" May 17 00:42:00.942835 env[1335]: time="2025-05-17T00:42:00.939978217Z" level=info msg="Start event monitor" May 17 00:42:00.942835 env[1335]: time="2025-05-17T00:42:00.940011253Z" level=info msg="Start snapshots syncer" May 17 00:42:00.942835 env[1335]: time="2025-05-17T00:42:00.940033019Z" level=info msg="Start cni network conf syncer for default" May 17 00:42:00.942835 env[1335]: time="2025-05-17T00:42:00.940046256Z" level=info msg="Start streaming server" May 17 00:42:00.960593 update-ssh-keys[1402]: Updated "/home/core/.ssh/authorized_keys" May 17 00:42:00.961332 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 00:42:01.202869 dbus-daemon[1295]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:42:01.203174 systemd[1]: Started systemd-hostnamed.service. May 17 00:42:01.203710 dbus-daemon[1295]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1378 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:42:01.215544 systemd[1]: Starting polkit.service... 
May 17 00:42:01.295756 polkitd[1407]: Started polkitd version 121 May 17 00:42:01.321264 polkitd[1407]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:42:01.321372 polkitd[1407]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:42:01.328477 polkitd[1407]: Finished loading, compiling and executing 2 rules May 17 00:42:01.329323 dbus-daemon[1295]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:42:01.329699 systemd[1]: Started polkit.service. May 17 00:42:01.330452 polkitd[1407]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:42:01.376361 systemd-hostnamed[1378]: Hostname set to (transient) May 17 00:42:01.379390 systemd-resolved[1229]: System hostname changed to 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260'. May 17 00:42:02.381767 tar[1330]: linux-amd64/LICENSE May 17 00:42:02.381767 tar[1330]: linux-amd64/README.md May 17 00:42:02.394795 systemd[1]: Finished prepare-helm.service. May 17 00:42:02.585131 sshd_keygen[1331]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:42:02.689560 systemd[1]: Finished sshd-keygen.service. May 17 00:42:02.700095 systemd[1]: Starting issuegen.service... May 17 00:42:02.710526 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:42:02.710950 systemd[1]: Finished issuegen.service. May 17 00:42:02.722252 systemd[1]: Starting systemd-user-sessions.service... May 17 00:42:02.745108 systemd[1]: Finished systemd-user-sessions.service. May 17 00:42:02.756786 systemd[1]: Started getty@tty1.service. May 17 00:42:02.769065 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:42:02.778169 systemd[1]: Reached target getty.target. May 17 00:42:02.830828 systemd[1]: Started kubelet.service. 
May 17 00:42:02.909342 locksmithd[1375]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:42:03.850897 kubelet[1442]: E0517 00:42:03.850827 1442 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:42:03.853611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:42:03.853994 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:42:06.425180 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. May 17 00:42:08.568506 kernel: loop2: detected capacity change from 0 to 2097152 May 17 00:42:08.592163 systemd-nspawn[1451]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. May 17 00:42:08.592163 systemd-nspawn[1451]: Press ^] three times within 1s to kill container. May 17 00:42:08.606471 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:42:08.677628 systemd[1]: Started oem-gce.service. May 17 00:42:08.678184 systemd[1]: Reached target multi-user.target. May 17 00:42:08.680803 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:42:08.693343 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:42:08.693820 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:42:08.694499 systemd[1]: Startup finished in 9.679s (kernel) + 17.263s (userspace) = 26.943s. 
May 17 00:42:08.741727 systemd-nspawn[1451]: + '[' -e /etc/default/instance_configs.cfg.template ']'
May 17 00:42:08.741727 systemd-nspawn[1451]: + echo -e '[InstanceSetup]\nset_host_keys = false'
May 17 00:42:08.742035 systemd-nspawn[1451]: + /usr/bin/google_instance_setup
May 17 00:42:09.055688 systemd[1]: Created slice system-sshd.slice.
May 17 00:42:09.058061 systemd[1]: Started sshd@0-10.128.0.56:22-139.178.89.65:58910.service.
May 17 00:42:09.389745 sshd[1460]: Accepted publickey for core from 139.178.89.65 port 58910 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:42:09.395194 sshd[1460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:09.415974 systemd[1]: Created slice user-500.slice.
May 17 00:42:09.417960 systemd[1]: Starting user-runtime-dir@500.service...
May 17 00:42:09.437132 systemd-logind[1312]: New session 1 of user core.
May 17 00:42:09.449847 systemd[1]: Finished user-runtime-dir@500.service.
May 17 00:42:09.452510 systemd[1]: Starting user@500.service...
May 17 00:42:09.471955 (systemd)[1469]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:09.481933 instance-setup[1459]: INFO Running google_set_multiqueue.
May 17 00:42:09.506262 instance-setup[1459]: INFO Set channels for eth0 to 2.
May 17 00:42:09.512576 instance-setup[1459]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
May 17 00:42:09.514787 instance-setup[1459]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
May 17 00:42:09.515494 instance-setup[1459]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
May 17 00:42:09.517508 instance-setup[1459]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
May 17 00:42:09.518163 instance-setup[1459]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
May 17 00:42:09.520520 instance-setup[1459]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
May 17 00:42:09.520926 instance-setup[1459]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
May 17 00:42:09.523677 instance-setup[1459]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
May 17 00:42:09.540333 instance-setup[1459]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
May 17 00:42:09.540771 instance-setup[1459]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
May 17 00:42:09.598339 systemd-nspawn[1451]: + /usr/bin/google_metadata_script_runner --script-type startup
May 17 00:42:09.656239 systemd[1469]: Queued start job for default target default.target.
May 17 00:42:09.657781 systemd[1469]: Reached target paths.target.
May 17 00:42:09.657838 systemd[1469]: Reached target sockets.target.
May 17 00:42:09.657870 systemd[1469]: Reached target timers.target.
May 17 00:42:09.657897 systemd[1469]: Reached target basic.target.
May 17 00:42:09.658119 systemd[1]: Started user@500.service.
May 17 00:42:09.659778 systemd[1]: Started session-1.scope.
May 17 00:42:09.663419 systemd[1469]: Reached target default.target.
May 17 00:42:09.663848 systemd[1469]: Startup finished in 180ms.
May 17 00:42:09.886922 systemd[1]: Started sshd@1-10.128.0.56:22-139.178.89.65:58914.service.
May 17 00:42:10.055656 startup-script[1501]: INFO Starting startup scripts.
May 17 00:42:10.069421 startup-script[1501]: INFO No startup scripts found in metadata.
May 17 00:42:10.069656 startup-script[1501]: INFO Finished running startup scripts.
May 17 00:42:10.109176 systemd-nspawn[1451]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
May 17 00:42:10.109389 systemd-nspawn[1451]: + daemon_pids=()
May 17 00:42:10.109389 systemd-nspawn[1451]: + for d in accounts clock_skew network
May 17 00:42:10.109591 systemd-nspawn[1451]: + daemon_pids+=($!)
May 17 00:42:10.109693 systemd-nspawn[1451]: + for d in accounts clock_skew network
May 17 00:42:10.109979 systemd-nspawn[1451]: + daemon_pids+=($!)
May 17 00:42:10.110114 systemd-nspawn[1451]: + /usr/bin/google_accounts_daemon
May 17 00:42:10.110388 systemd-nspawn[1451]: + for d in accounts clock_skew network
May 17 00:42:10.110683 systemd-nspawn[1451]: + daemon_pids+=($!)
May 17 00:42:10.110811 systemd-nspawn[1451]: + NOTIFY_SOCKET=/run/systemd/notify
May 17 00:42:10.110811 systemd-nspawn[1451]: + /usr/bin/systemd-notify --ready
May 17 00:42:10.111274 systemd-nspawn[1451]: + /usr/bin/google_network_daemon
May 17 00:42:10.120041 systemd-nspawn[1451]: + /usr/bin/google_clock_skew_daemon
May 17 00:42:10.171973 systemd-nspawn[1451]: + wait -n 36 37 38
May 17 00:42:10.199609 sshd[1505]: Accepted publickey for core from 139.178.89.65 port 58914 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:42:10.200783 sshd[1505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:10.210030 systemd[1]: Started session-2.scope.
May 17 00:42:10.212403 systemd-logind[1312]: New session 2 of user core.
May 17 00:42:10.420940 sshd[1505]: pam_unix(sshd:session): session closed for user core
May 17 00:42:10.426667 systemd[1]: sshd@1-10.128.0.56:22-139.178.89.65:58914.service: Deactivated successfully.
May 17 00:42:10.427946 systemd[1]: session-2.scope: Deactivated successfully.
May 17 00:42:10.430813 systemd-logind[1312]: Session 2 logged out. Waiting for processes to exit.
May 17 00:42:10.433089 systemd-logind[1312]: Removed session 2.
May 17 00:42:10.465154 systemd[1]: Started sshd@2-10.128.0.56:22-139.178.89.65:58916.service.
May 17 00:42:10.767522 sshd[1518]: Accepted publickey for core from 139.178.89.65 port 58916 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:42:10.769067 sshd[1518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:10.777947 systemd[1]: Started session-3.scope.
May 17 00:42:10.779501 systemd-logind[1312]: New session 3 of user core.
May 17 00:42:10.924265 groupadd[1528]: group added to /etc/group: name=google-sudoers, GID=1000
May 17 00:42:10.928915 groupadd[1528]: group added to /etc/gshadow: name=google-sudoers
May 17 00:42:10.958977 groupadd[1528]: new group: name=google-sudoers, GID=1000
May 17 00:42:10.959210 google-clock-skew[1510]: INFO Starting Google Clock Skew daemon.
May 17 00:42:10.977561 sshd[1518]: pam_unix(sshd:session): session closed for user core
May 17 00:42:10.982358 systemd[1]: sshd@2-10.128.0.56:22-139.178.89.65:58916.service: Deactivated successfully.
May 17 00:42:10.983716 systemd[1]: session-3.scope: Deactivated successfully.
May 17 00:42:10.986957 systemd-logind[1312]: Session 3 logged out. Waiting for processes to exit.
May 17 00:42:10.990971 google-clock-skew[1510]: INFO Clock drift token has changed: 0.
May 17 00:42:10.988876 systemd-logind[1312]: Removed session 3.
May 17 00:42:11.001126 systemd-nspawn[1451]: hwclock: Cannot access the Hardware Clock via any known method.
May 17 00:42:11.003228 systemd-nspawn[1451]: hwclock: Use the --verbose option to see the details of our search for an access method.
May 17 00:42:11.005566 google-clock-skew[1510]: WARNING Failed to sync system time with hardware clock.
May 17 00:42:11.011001 google-accounts[1509]: INFO Starting Google Accounts daemon.
May 17 00:42:11.021093 systemd[1]: Started sshd@3-10.128.0.56:22-139.178.89.65:58930.service.
May 17 00:42:11.055875 google-networking[1511]: INFO Starting Google Networking daemon.
May 17 00:42:11.060831 google-accounts[1509]: WARNING OS Login not installed.
May 17 00:42:11.061922 google-accounts[1509]: INFO Creating a new user account for 0.
May 17 00:42:11.069032 systemd-nspawn[1451]: useradd: invalid user name '0': use --badname to ignore
May 17 00:42:11.070085 google-accounts[1509]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
May 17 00:42:11.316629 sshd[1539]: Accepted publickey for core from 139.178.89.65 port 58930 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:42:11.319043 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:11.326324 systemd[1]: Started session-4.scope.
May 17 00:42:11.326697 systemd-logind[1312]: New session 4 of user core.
May 17 00:42:11.533683 sshd[1539]: pam_unix(sshd:session): session closed for user core
May 17 00:42:11.538554 systemd[1]: sshd@3-10.128.0.56:22-139.178.89.65:58930.service: Deactivated successfully.
May 17 00:42:11.540482 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:42:11.540577 systemd-logind[1312]: Session 4 logged out. Waiting for processes to exit.
May 17 00:42:11.542711 systemd-logind[1312]: Removed session 4.
May 17 00:42:11.578530 systemd[1]: Started sshd@4-10.128.0.56:22-139.178.89.65:58946.service.
May 17 00:42:11.865465 sshd[1550]: Accepted publickey for core from 139.178.89.65 port 58946 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:42:11.867639 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:11.875106 systemd[1]: Started session-5.scope.
May 17 00:42:11.875484 systemd-logind[1312]: New session 5 of user core.
May 17 00:42:12.066857 sudo[1554]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 17 00:42:12.067409 sudo[1554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:42:12.076742 dbus-daemon[1295]: \xd0]\xa2\xf5\xfbU: received setenforce notice (enforcing=1720048624)
May 17 00:42:12.079631 sudo[1554]: pam_unix(sudo:session): session closed for user root
May 17 00:42:12.124856 sshd[1550]: pam_unix(sshd:session): session closed for user core
May 17 00:42:12.130609 systemd[1]: sshd@4-10.128.0.56:22-139.178.89.65:58946.service: Deactivated successfully.
May 17 00:42:12.132323 systemd-logind[1312]: Session 5 logged out. Waiting for processes to exit.
May 17 00:42:12.132515 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:42:12.134927 systemd-logind[1312]: Removed session 5.
May 17 00:42:12.170860 systemd[1]: Started sshd@5-10.128.0.56:22-139.178.89.65:58956.service.
May 17 00:42:12.460213 sshd[1558]: Accepted publickey for core from 139.178.89.65 port 58956 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:42:12.462013 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:12.469548 systemd-logind[1312]: New session 6 of user core.
May 17 00:42:12.470087 systemd[1]: Started session-6.scope.
May 17 00:42:12.640589 sudo[1563]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 17 00:42:12.641087 sudo[1563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:42:12.645879 sudo[1563]: pam_unix(sudo:session): session closed for user root
May 17 00:42:12.659215 sudo[1562]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 17 00:42:12.659732 sudo[1562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:42:12.674304 systemd[1]: Stopping audit-rules.service...
May 17 00:42:12.674000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 17 00:42:12.681813 kernel: kauditd_printk_skb: 163 callbacks suppressed
May 17 00:42:12.681914 kernel: audit: type=1305 audit(1747442532.674:142): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 17 00:42:12.681960 auditctl[1566]: No rules
May 17 00:42:12.683209 systemd[1]: audit-rules.service: Deactivated successfully.
May 17 00:42:12.683723 systemd[1]: Stopped audit-rules.service.
May 17 00:42:12.688127 systemd[1]: Starting audit-rules.service...
May 17 00:42:12.674000 audit[1566]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9568e470 a2=420 a3=0 items=0 ppid=1 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:12.741395 kernel: audit: type=1300 audit(1747442532.674:142): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9568e470 a2=420 a3=0 items=0 ppid=1 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:12.741548 kernel: audit: type=1327 audit(1747442532.674:142): proctitle=2F7362696E2F617564697463746C002D44
May 17 00:42:12.674000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
May 17 00:42:12.741686 augenrules[1584]: No rules
May 17 00:42:12.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.742407 systemd[1]: Finished audit-rules.service.
May 17 00:42:12.755453 sudo[1562]: pam_unix(sudo:session): session closed for user root
May 17 00:42:12.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.764465 kernel: audit: type=1131 audit(1747442532.680:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.764540 kernel: audit: type=1130 audit(1747442532.741:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.754000 audit[1562]: USER_END pid=1562 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.811161 kernel: audit: type=1106 audit(1747442532.754:145): pid=1562 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.834604 kernel: audit: type=1104 audit(1747442532.754:146): pid=1562 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.754000 audit[1562]: CRED_DISP pid=1562 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.820437 systemd-logind[1312]: Session 6 logged out. Waiting for processes to exit.
May 17 00:42:12.814725 sshd[1558]: pam_unix(sshd:session): session closed for user core
May 17 00:42:12.823274 systemd[1]: sshd@5-10.128.0.56:22-139.178.89.65:58956.service: Deactivated successfully.
May 17 00:42:12.825086 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:42:12.827955 systemd-logind[1312]: Removed session 6.
May 17 00:42:12.815000 audit[1558]: USER_END pid=1558 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:42:12.892141 kernel: audit: type=1106 audit(1747442532.815:147): pid=1558 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:42:12.892278 kernel: audit: type=1104 audit(1747442532.815:148): pid=1558 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:42:12.815000 audit[1558]: CRED_DISP pid=1558 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:42:12.874531 systemd[1]: Started sshd@6-10.128.0.56:22-139.178.89.65:58966.service.
May 17 00:42:12.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.56:22-139.178.89.65:58956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.921569 kernel: audit: type=1131 audit(1747442532.822:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.56:22-139.178.89.65:58956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:12.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.56:22-139.178.89.65:58966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:13.168000 audit[1591]: USER_ACCT pid=1591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:42:13.170720 sshd[1591]: Accepted publickey for core from 139.178.89.65 port 58966 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:42:13.170000 audit[1591]: CRED_ACQ pid=1591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:42:13.170000 audit[1591]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3493e020 a2=3 a3=0 items=0 ppid=1 pid=1591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:13.170000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:42:13.171653 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:13.179478 systemd[1]: Started session-7.scope.
May 17 00:42:13.179841 systemd-logind[1312]: New session 7 of user core.
May 17 00:42:13.190000 audit[1591]: USER_START pid=1591 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:42:13.194000 audit[1594]: CRED_ACQ pid=1594 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:42:13.346000 audit[1595]: USER_ACCT pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:42:13.347623 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:42:13.347000 audit[1595]: CRED_REFR pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:42:13.348138 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:42:13.350000 audit[1595]: USER_START pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:42:13.383115 systemd[1]: Starting docker.service...
May 17 00:42:13.432174 env[1605]: time="2025-05-17T00:42:13.432008601Z" level=info msg="Starting up"
May 17 00:42:13.436285 env[1605]: time="2025-05-17T00:42:13.436230900Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:42:13.436480 env[1605]: time="2025-05-17T00:42:13.436337273Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:42:13.436480 env[1605]: time="2025-05-17T00:42:13.436376171Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:42:13.436480 env[1605]: time="2025-05-17T00:42:13.436394689Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:42:13.438998 env[1605]: time="2025-05-17T00:42:13.438969158Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:42:13.439135 env[1605]: time="2025-05-17T00:42:13.439105106Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:42:13.439230 env[1605]: time="2025-05-17T00:42:13.439208842Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:42:13.439322 env[1605]: time="2025-05-17T00:42:13.439304274Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:42:14.008244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:42:14.008643 systemd[1]: Stopped kubelet.service.
May 17 00:42:14.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:14.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:14.011260 systemd[1]: Starting kubelet.service...
May 17 00:42:14.018341 env[1605]: time="2025-05-17T00:42:14.018279396Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 17 00:42:14.018594 env[1605]: time="2025-05-17T00:42:14.018561714Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 17 00:42:14.019177 env[1605]: time="2025-05-17T00:42:14.019137472Z" level=info msg="Loading containers: start."
May 17 00:42:14.143000 audit[1639]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1639 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.143000 audit[1639]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd20635550 a2=0 a3=7ffd2063553c items=0 ppid=1605 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.143000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
May 17 00:42:14.146000 audit[1641]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1641 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.146000 audit[1641]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffde2213f70 a2=0 a3=7ffde2213f5c items=0 ppid=1605 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.146000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
May 17 00:42:14.149000 audit[1643]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1643 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.149000 audit[1643]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc60be40d0 a2=0 a3=7ffc60be40bc items=0 ppid=1605 pid=1643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.149000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 17 00:42:14.152000 audit[1645]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1645 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.152000 audit[1645]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc9e7c3190 a2=0 a3=7ffc9e7c317c items=0 ppid=1605 pid=1645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.152000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 17 00:42:14.156000 audit[1647]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1647 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.156000 audit[1647]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff9cf47290 a2=0 a3=7fff9cf4727c items=0 ppid=1605 pid=1647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.156000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
May 17 00:42:14.202000 audit[1652]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1652 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.202000 audit[1652]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc214b430 a2=0 a3=7ffcc214b41c items=0 ppid=1605 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.202000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
May 17 00:42:14.278000 audit[1654]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1654 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.278000 audit[1654]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe17727ef0 a2=0 a3=7ffe17727edc items=0 ppid=1605 pid=1654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.278000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
May 17 00:42:14.283000 audit[1656]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1656 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.283000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffee95a7110 a2=0 a3=7ffee95a70fc items=0 ppid=1605 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.283000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
May 17 00:42:14.288000 audit[1658]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1658 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.288000 audit[1658]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff733b8600 a2=0 a3=7fff733b85ec items=0 ppid=1605 pid=1658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.288000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 17 00:42:14.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:42:14.324762 systemd[1]: Started kubelet.service.
May 17 00:42:14.391000 audit[1672]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.391000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd029ea450 a2=0 a3=7ffd029ea43c items=0 ppid=1605 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.391000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 17 00:42:14.394065 kubelet[1664]: E0517 00:42:14.394024 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:42:14.398000 audit[1673]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1673 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.398000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd312da720 a2=0 a3=7ffd312da70c items=0 ppid=1605 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.398000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 17 00:42:14.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 17 00:42:14.398536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:42:14.398888 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:42:14.419451 kernel: Initializing XFRM netlink socket
May 17 00:42:14.467511 env[1605]: time="2025-05-17T00:42:14.467446025Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 17 00:42:14.502000 audit[1682]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1682 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.502000 audit[1682]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc65625250 a2=0 a3=7ffc6562523c items=0 ppid=1605 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.502000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
May 17 00:42:14.514000 audit[1685]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.514000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd003c6620 a2=0 a3=7ffd003c660c items=0 ppid=1605 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.514000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
May 17 00:42:14.519000 audit[1688]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.519000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe41b9c180 a2=0 a3=7ffe41b9c16c items=0 ppid=1605 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.519000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
May 17 00:42:14.523000 audit[1690]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.523000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff9cd30200 a2=0 a3=7fff9cd301ec items=0 ppid=1605 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.523000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
May 17 00:42:14.527000 audit[1692]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1692 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.527000 audit[1692]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe2356d460 a2=0 a3=7ffe2356d44c items=0 ppid=1605 pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.527000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
May 17 00:42:14.530000 audit[1694]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.530000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd7523fe20 a2=0 a3=7ffd7523fe0c items=0 ppid=1605 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.530000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
May 17 00:42:14.534000 audit[1696]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.534000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffee193a440 a2=0 a3=7ffee193a42c items=0 ppid=1605 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.534000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
May 17 00:42:14.547000 audit[1699]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 17 00:42:14.547000 audit[1699]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffeabb0aa90 a2=0 a3=7ffeabb0aa7c items=0 ppid=1605 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:42:14.547000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
May 17 00:42:14.550000 audit[1701]: NETFILTER_CFG table=filter:21 family=2 entries=1
op=nft_register_rule pid=1701 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:14.550000 audit[1701]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff15b333b0 a2=0 a3=7fff15b3339c items=0 ppid=1605 pid=1701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.550000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 00:42:14.554000 audit[1703]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1703 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:14.554000 audit[1703]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffda3eda150 a2=0 a3=7ffda3eda13c items=0 ppid=1605 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.554000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 00:42:14.557000 audit[1705]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:14.557000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdc66c0350 a2=0 a3=7ffdc66c033c items=0 ppid=1605 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.557000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 17 00:42:14.558494 systemd-networkd[1075]: docker0: Link UP May 17 00:42:14.574000 audit[1709]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1709 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:14.574000 audit[1709]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd181ef1c0 a2=0 a3=7ffd181ef1ac items=0 ppid=1605 pid=1709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.574000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 00:42:14.579000 audit[1710]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1710 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:14.579000 audit[1710]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff6303ebc0 a2=0 a3=7fff6303ebac items=0 ppid=1605 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:14.579000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:42:14.580734 env[1605]: time="2025-05-17T00:42:14.580676690Z" level=info msg="Loading containers: done." May 17 00:42:14.600371 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3655630418-merged.mount: Deactivated successfully. 
May 17 00:42:14.608858 env[1605]: time="2025-05-17T00:42:14.608801162Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:42:14.609155 env[1605]: time="2025-05-17T00:42:14.609111532Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:42:14.609315 env[1605]: time="2025-05-17T00:42:14.609285453Z" level=info msg="Daemon has completed initialization" May 17 00:42:14.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:14.631213 systemd[1]: Started docker.service. May 17 00:42:14.642151 env[1605]: time="2025-05-17T00:42:14.642077626Z" level=info msg="API listen on /run/docker.sock" May 17 00:42:15.703102 env[1335]: time="2025-05-17T00:42:15.703011094Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:42:16.224120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037521390.mount: Deactivated successfully. 
May 17 00:42:17.874806 env[1335]: time="2025-05-17T00:42:17.874730921Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:17.878079 env[1335]: time="2025-05-17T00:42:17.878021879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:17.880881 env[1335]: time="2025-05-17T00:42:17.880833100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:17.883366 env[1335]: time="2025-05-17T00:42:17.883322006Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:17.884674 env[1335]: time="2025-05-17T00:42:17.884613282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:42:17.885539 env[1335]: time="2025-05-17T00:42:17.885502532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:42:19.529691 env[1335]: time="2025-05-17T00:42:19.529617530Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:19.532704 env[1335]: time="2025-05-17T00:42:19.532654576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:42:19.536041 env[1335]: time="2025-05-17T00:42:19.535973792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:19.538911 env[1335]: time="2025-05-17T00:42:19.538853364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:19.540338 env[1335]: time="2025-05-17T00:42:19.540290301Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:42:19.542230 env[1335]: time="2025-05-17T00:42:19.542186244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:42:20.850730 env[1335]: time="2025-05-17T00:42:20.850660462Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:20.853777 env[1335]: time="2025-05-17T00:42:20.853719950Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:20.856193 env[1335]: time="2025-05-17T00:42:20.856144705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:20.858535 env[1335]: time="2025-05-17T00:42:20.858489828Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:20.859755 env[1335]: time="2025-05-17T00:42:20.859699608Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:42:20.860695 env[1335]: time="2025-05-17T00:42:20.860660727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:42:21.958940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618521584.mount: Deactivated successfully. May 17 00:42:22.738703 env[1335]: time="2025-05-17T00:42:22.738623223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:22.741760 env[1335]: time="2025-05-17T00:42:22.741706740Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:22.744276 env[1335]: time="2025-05-17T00:42:22.744218897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:22.746728 env[1335]: time="2025-05-17T00:42:22.746668531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:22.747622 env[1335]: time="2025-05-17T00:42:22.747565199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference 
\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:42:22.749055 env[1335]: time="2025-05-17T00:42:22.748990477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:42:23.181558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330259731.mount: Deactivated successfully. May 17 00:42:24.458180 env[1335]: time="2025-05-17T00:42:24.458076728Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:24.461742 env[1335]: time="2025-05-17T00:42:24.461690516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:24.470570 env[1335]: time="2025-05-17T00:42:24.470504894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:24.471729 env[1335]: time="2025-05-17T00:42:24.471677706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:24.472978 env[1335]: time="2025-05-17T00:42:24.472918823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:42:24.473857 env[1335]: time="2025-05-17T00:42:24.473809682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:42:24.650185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:42:24.650606 systemd[1]: Stopped kubelet.service. 
May 17 00:42:24.679572 kernel: kauditd_printk_skb: 88 callbacks suppressed May 17 00:42:24.679738 kernel: audit: type=1130 audit(1747442544.649:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:24.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:24.653622 systemd[1]: Starting kubelet.service... May 17 00:42:24.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:24.703656 kernel: audit: type=1131 audit(1747442544.649:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:24.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:24.915293 systemd[1]: Started kubelet.service. May 17 00:42:24.939505 kernel: audit: type=1130 audit(1747442544.914:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:24.993122 kubelet[1752]: E0517 00:42:24.993036 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:42:24.995784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:42:24.996277 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:42:24.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:42:25.019490 kernel: audit: type=1131 audit(1747442544.995:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:42:25.104457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405105259.mount: Deactivated successfully. 
May 17 00:42:25.113902 env[1335]: time="2025-05-17T00:42:25.113837275Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:25.116549 env[1335]: time="2025-05-17T00:42:25.116501821Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:25.118783 env[1335]: time="2025-05-17T00:42:25.118741977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:25.120959 env[1335]: time="2025-05-17T00:42:25.120916722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:25.121770 env[1335]: time="2025-05-17T00:42:25.121710296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:42:25.122485 env[1335]: time="2025-05-17T00:42:25.122450587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:42:25.499034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350263932.mount: Deactivated successfully. 
May 17 00:42:28.085628 env[1335]: time="2025-05-17T00:42:28.085544982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:28.088925 env[1335]: time="2025-05-17T00:42:28.088873419Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:28.092482 env[1335]: time="2025-05-17T00:42:28.092413475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:28.095522 env[1335]: time="2025-05-17T00:42:28.095479619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:28.097150 env[1335]: time="2025-05-17T00:42:28.097101850Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:42:30.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:30.957946 systemd[1]: Stopped kubelet.service. May 17 00:42:30.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:30.981723 systemd[1]: Starting kubelet.service... 
May 17 00:42:31.003264 kernel: audit: type=1130 audit(1747442550.956:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:31.003374 kernel: audit: type=1131 audit(1747442550.963:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:31.038757 systemd[1]: Reloading. May 17 00:42:31.153717 /usr/lib/systemd/system-generators/torcx-generator[1804]: time="2025-05-17T00:42:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:42:31.153775 /usr/lib/systemd/system-generators/torcx-generator[1804]: time="2025-05-17T00:42:31Z" level=info msg="torcx already run" May 17 00:42:31.362753 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:42:31.362785 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:42:31.387658 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:42:31.514874 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 17 00:42:31.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:31.538447 kernel: audit: type=1131 audit(1747442551.514:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:31.551385 systemd[1]: Started kubelet.service. May 17 00:42:31.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:31.575950 kernel: audit: type=1130 audit(1747442551.551:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:31.584086 systemd[1]: Stopping kubelet.service... May 17 00:42:31.589007 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:42:31.589469 systemd[1]: Stopped kubelet.service. May 17 00:42:31.594435 systemd[1]: Starting kubelet.service... May 17 00:42:31.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:31.625593 kernel: audit: type=1131 audit(1747442551.588:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:31.931075 systemd[1]: Started kubelet.service. May 17 00:42:31.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:42:31.957680 kernel: audit: type=1130 audit(1747442551.930:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:32.012939 kubelet[1873]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:42:32.012939 kubelet[1873]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:42:32.012939 kubelet[1873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:42:32.013703 kubelet[1873]: I0517 00:42:32.013041 1873 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:42:32.483736 kubelet[1873]: I0517 00:42:32.483673 1873 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:42:32.483736 kubelet[1873]: I0517 00:42:32.483713 1873 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:42:32.484912 kubelet[1873]: I0517 00:42:32.484874 1873 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:42:32.562158 kubelet[1873]: E0517 00:42:32.562096 1873 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.56:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:32.563015 kubelet[1873]: I0517 00:42:32.562963 1873 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:42:32.575210 kubelet[1873]: E0517 00:42:32.575134 1873 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:42:32.575210 kubelet[1873]: I0517 00:42:32.575195 1873 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:42:32.581972 kubelet[1873]: I0517 00:42:32.581934 1873 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:42:32.583847 kubelet[1873]: I0517 00:42:32.583790 1873 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:42:32.584111 kubelet[1873]: I0517 00:42:32.584046 1873 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:42:32.584387 kubelet[1873]: I0517 00:42:32.584099 1873 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Topolog
yManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:42:32.584636 kubelet[1873]: I0517 00:42:32.584392 1873 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:42:32.584636 kubelet[1873]: I0517 00:42:32.584412 1873 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:42:32.584636 kubelet[1873]: I0517 00:42:32.584614 1873 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:32.593969 kubelet[1873]: I0517 00:42:32.593927 1873 kubelet.go:408] "Attempting to sync node with API server" May 17 00:42:32.594104 kubelet[1873]: I0517 00:42:32.593975 1873 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:42:32.594104 kubelet[1873]: I0517 00:42:32.594035 1873 kubelet.go:314] "Adding apiserver pod source" May 17 00:42:32.594104 kubelet[1873]: I0517 00:42:32.594063 1873 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:42:32.599785 kubelet[1873]: W0517 00:42:32.599691 1873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260&limit=500&resourceVersion=0": dial tcp 10.128.0.56:6443: connect: connection refused May 17 00:42:32.599925 kubelet[1873]: E0517 00:42:32.599781 1873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260&limit=500&resourceVersion=0\": dial tcp 10.128.0.56:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:32.602315 kubelet[1873]: I0517 00:42:32.602291 1873 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:42:32.603122 
kubelet[1873]: I0517 00:42:32.603096 1873 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:42:32.604671 kubelet[1873]: W0517 00:42:32.604643 1873 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:42:32.608997 kubelet[1873]: I0517 00:42:32.608947 1873 server.go:1274] "Started kubelet" May 17 00:42:32.609209 kubelet[1873]: W0517 00:42:32.609149 1873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.56:6443: connect: connection refused May 17 00:42:32.609312 kubelet[1873]: E0517 00:42:32.609219 1873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.56:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:32.623000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:32.641755 kubelet[1873]: I0517 00:42:32.625272 1873 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:42:32.641755 kubelet[1873]: I0517 00:42:32.625340 1873 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:42:32.641755 kubelet[1873]: I0517 
00:42:32.625479 1873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:42:32.647465 kernel: audit: type=1400 audit(1747442552.623:198): avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:32.623000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:32.659461 kernel: audit: type=1401 audit(1747442552.623:198): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:32.623000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000962e40 a1=c0008d3710 a2=c000962e10 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.663232 kubelet[1873]: I0517 00:42:32.663133 1873 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:42:32.665488 kubelet[1873]: I0517 00:42:32.665456 1873 server.go:449] "Adding debug handlers to kubelet server" May 17 00:42:32.691457 kernel: audit: type=1300 audit(1747442552.623:198): arch=c000003e syscall=188 success=no exit=-22 a0=c000962e40 a1=c0008d3710 a2=c000962e10 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:32.707651 kubelet[1873]: I0517 00:42:32.692325 1873 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 May 17 00:42:32.707651 kubelet[1873]: I0517 00:42:32.692860 1873 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:42:32.707651 kubelet[1873]: I0517 00:42:32.693286 1873 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:42:32.707651 kubelet[1873]: I0517 00:42:32.697239 1873 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:42:32.707651 kubelet[1873]: E0517 00:42:32.697633 1873 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" not found" May 17 00:42:32.707651 kubelet[1873]: I0517 00:42:32.703393 1873 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:42:32.707651 kubelet[1873]: I0517 00:42:32.703522 1873 reconciler.go:26] "Reconciler: start to sync state" May 17 00:42:32.721470 kernel: audit: type=1327 audit(1747442552.623:198): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:32.623000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:32.623000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:32.623000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008a3780 a1=c0008d3728 a2=c000962ed0 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:32.631000 audit[1885]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:32.725978 kubelet[1873]: E0517 00:42:32.723655 1873 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260.184029be45a80cba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,UID:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,},FirstTimestamp:2025-05-17 00:42:32.608918714 +0000 UTC m=+0.663262065,LastTimestamp:2025-05-17 00:42:32.608918714 +0000 UTC m=+0.663262065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,}" May 17 00:42:32.728583 kubelet[1873]: I0517 00:42:32.728553 1873 factory.go:221] Registration of the systemd container factory successfully May 17 00:42:32.631000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffff190320 a2=0 a3=7fffff19030c items=0 ppid=1873 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.631000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:42:32.636000 audit[1886]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:32.636000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbcdfb820 a2=0 a3=7ffcbcdfb80c items=0 ppid=1873 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:42:32.698000 audit[1888]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:32.698000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd6ef47ec0 a2=0 a3=7ffd6ef47eac items=0 ppid=1873 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:42:32.703000 audit[1890]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:32.703000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd9672b2e0 a2=0 a3=7ffd9672b2cc items=0 ppid=1873 pid=1890 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:42:32.730152 kubelet[1873]: I0517 00:42:32.728890 1873 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:42:32.732962 kubelet[1873]: W0517 00:42:32.732292 1873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.56:6443: connect: connection refused May 17 00:42:32.733171 kubelet[1873]: E0517 00:42:32.733139 1873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.56:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:32.737816 kubelet[1873]: E0517 00:42:32.734492 1873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260?timeout=10s\": dial tcp 10.128.0.56:6443: connect: connection refused" interval="200ms" May 17 00:42:32.738003 kubelet[1873]: I0517 00:42:32.736997 1873 factory.go:221] Registration of the containerd container factory successfully May 17 00:42:32.743661 kubelet[1873]: E0517 00:42:32.737709 1873 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:42:32.750000 audit[1895]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:32.750000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffef8bd46e0 a2=0 a3=7ffef8bd46cc items=0 ppid=1873 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 17 00:42:32.753471 kubelet[1873]: I0517 00:42:32.753385 1873 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 17 00:42:32.754000 audit[1899]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:32.754000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5a59de40 a2=0 a3=7ffc5a59de2c items=0 ppid=1873 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:42:32.762000 audit[1898]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:32.762000 audit[1898]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcb64e3dc0 a2=0 a3=7ffcb64e3dac items=0 ppid=1873 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.762000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:42:32.765000 audit[1901]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:32.765000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd8473770 a2=0 a3=7ffcd847375c items=0 ppid=1873 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.765000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:42:32.768611 kubelet[1873]: I0517 00:42:32.768569 1873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:42:32.768736 kubelet[1873]: I0517 00:42:32.768618 1873 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:42:32.768736 kubelet[1873]: I0517 00:42:32.768663 1873 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:42:32.768846 kubelet[1873]: E0517 00:42:32.768729 1873 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:42:32.768000 audit[1902]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:32.768000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5250dfe0 a2=0 a3=7ffe5250dfcc items=0 ppid=1873 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:42:32.773646 kubelet[1873]: W0517 00:42:32.773562 1873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.56:6443: connect: connection refused May 17 00:42:32.773774 kubelet[1873]: E0517 00:42:32.773674 1873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.128.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.56:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:32.775000 audit[1903]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:32.775000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee3da37d0 a2=0 a3=7ffee3da37bc items=0 ppid=1873 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:42:32.778000 audit[1904]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:32.778000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc74612440 a2=0 a3=7ffc7461242c items=0 ppid=1873 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:42:32.780000 audit[1905]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:32.780000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdb74d1660 a2=0 a3=7ffdb74d164c items=0 ppid=1873 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:42:32.785375 kubelet[1873]: I0517 00:42:32.785348 1873 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:42:32.785588 kubelet[1873]: I0517 00:42:32.785567 1873 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:42:32.785780 kubelet[1873]: I0517 00:42:32.785764 1873 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:32.788799 kubelet[1873]: I0517 00:42:32.788779 1873 policy_none.go:49] "None policy: Start" May 17 00:42:32.790087 kubelet[1873]: I0517 00:42:32.790067 1873 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:42:32.790227 kubelet[1873]: I0517 00:42:32.790212 1873 state_mem.go:35] "Initializing new in-memory state store" May 17 00:42:32.797988 kubelet[1873]: E0517 00:42:32.797958 1873 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" not found" May 17 00:42:32.799003 kubelet[1873]: I0517 00:42:32.798975 1873 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:42:32.797000 audit[1873]: AVC avc: denied { mac_admin } for pid=1873 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:32.797000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:32.797000 audit[1873]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ec7470 a1=c000ec0f90 a2=c000ec7440 a3=25 items=0 ppid=1 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:32.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:32.799797 kubelet[1873]: I0517 00:42:32.799770 1873 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:42:32.800089 kubelet[1873]: I0517 00:42:32.800067 1873 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:42:32.800251 kubelet[1873]: I0517 00:42:32.800202 1873 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:42:32.802775 kubelet[1873]: I0517 00:42:32.802752 1873 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:42:32.804830 kubelet[1873]: E0517 00:42:32.804804 1873 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" not found" May 17 00:42:32.905280 kubelet[1873]: I0517 00:42:32.904910 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905280 kubelet[1873]: I0517 00:42:32.905031 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b49af39a50a49e37ff9c5cba22f92d1e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"b49af39a50a49e37ff9c5cba22f92d1e\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905280 kubelet[1873]: I0517 00:42:32.905099 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b49af39a50a49e37ff9c5cba22f92d1e-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"b49af39a50a49e37ff9c5cba22f92d1e\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905280 kubelet[1873]: I0517 00:42:32.905181 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b49af39a50a49e37ff9c5cba22f92d1e-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"b49af39a50a49e37ff9c5cba22f92d1e\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905680 kubelet[1873]: I0517 00:42:32.905257 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905680 kubelet[1873]: I0517 00:42:32.905297 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905680 kubelet[1873]: I0517 00:42:32.905359 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905680 kubelet[1873]: I0517 00:42:32.905417 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905922 kubelet[1873]: I0517 00:42:32.905480 1873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17c115f86821aab2ef9e8b836dbfe646-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"17c115f86821aab2ef9e8b836dbfe646\") " pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.905922 kubelet[1873]: I0517 00:42:32.905836 1873 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.906336 kubelet[1873]: E0517 00:42:32.906301 1873 kubelet_node_status.go:95] "Unable to 
register node with API server" err="Post \"https://10.128.0.56:6443/api/v1/nodes\": dial tcp 10.128.0.56:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:32.939118 kubelet[1873]: E0517 00:42:32.939052 1873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260?timeout=10s\": dial tcp 10.128.0.56:6443: connect: connection refused" interval="400ms" May 17 00:42:33.114309 kubelet[1873]: I0517 00:42:33.111272 1873 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:33.114309 kubelet[1873]: E0517 00:42:33.111686 1873 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.56:6443/api/v1/nodes\": dial tcp 10.128.0.56:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:33.184892 env[1335]: time="2025-05-17T00:42:33.184813363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,Uid:d29b2fa7ef8343b6dda95df6df2d4329,Namespace:kube-system,Attempt:0,}" May 17 00:42:33.189796 env[1335]: time="2025-05-17T00:42:33.189701950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,Uid:17c115f86821aab2ef9e8b836dbfe646,Namespace:kube-system,Attempt:0,}" May 17 00:42:33.198957 env[1335]: time="2025-05-17T00:42:33.198902425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,Uid:b49af39a50a49e37ff9c5cba22f92d1e,Namespace:kube-system,Attempt:0,}" May 17 00:42:33.340687 kubelet[1873]: E0517 00:42:33.340620 1873 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.128.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260?timeout=10s\": dial tcp 10.128.0.56:6443: connect: connection refused" interval="800ms" May 17 00:42:33.518975 kubelet[1873]: I0517 00:42:33.518827 1873 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:33.519965 kubelet[1873]: E0517 00:42:33.519891 1873 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.56:6443/api/v1/nodes\": dial tcp 10.128.0.56:6443: connect: connection refused" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:33.545610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153691272.mount: Deactivated successfully. May 17 00:42:33.555135 env[1335]: time="2025-05-17T00:42:33.555077881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.556630 env[1335]: time="2025-05-17T00:42:33.556581207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.560220 env[1335]: time="2025-05-17T00:42:33.560154650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.562885 env[1335]: time="2025-05-17T00:42:33.562843517Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.566956 env[1335]: time="2025-05-17T00:42:33.566914966Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.568299 env[1335]: time="2025-05-17T00:42:33.568231442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.569804 env[1335]: time="2025-05-17T00:42:33.569753171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.572608 env[1335]: time="2025-05-17T00:42:33.572567284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.576342 env[1335]: time="2025-05-17T00:42:33.576285887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.578228 env[1335]: time="2025-05-17T00:42:33.578171029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.581098 env[1335]: time="2025-05-17T00:42:33.581039469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.585625 env[1335]: time="2025-05-17T00:42:33.585575984Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:33.643654 env[1335]: time="2025-05-17T00:42:33.643563354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:33.643847 env[1335]: time="2025-05-17T00:42:33.643691053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:33.643847 env[1335]: time="2025-05-17T00:42:33.643760021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:33.644130 env[1335]: time="2025-05-17T00:42:33.644058404Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a64790c58317a6de29c1edfde6b7f3e82b7981455f59bb1b8cca9bf96a93c228 pid=1925 runtime=io.containerd.runc.v2 May 17 00:42:33.646707 env[1335]: time="2025-05-17T00:42:33.646612890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:33.646962 env[1335]: time="2025-05-17T00:42:33.646901469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:33.647188 env[1335]: time="2025-05-17T00:42:33.647122043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:33.647643 env[1335]: time="2025-05-17T00:42:33.647575448Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43dc8b526a7c76a3fdab82c383567b0fbbce0f1c0cb51f75f0f385551a6d1be9 pid=1927 runtime=io.containerd.runc.v2 May 17 00:42:33.651759 env[1335]: time="2025-05-17T00:42:33.651653412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:33.651759 env[1335]: time="2025-05-17T00:42:33.651706908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:33.651985 env[1335]: time="2025-05-17T00:42:33.651749308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:33.652219 env[1335]: time="2025-05-17T00:42:33.652146126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e0ba28f0629e6cbecc33281535a3afafc4407d7e7bc22e553741de6f6fabecd pid=1930 runtime=io.containerd.runc.v2 May 17 00:42:33.717611 kubelet[1873]: W0517 00:42:33.717415 1873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.56:6443: connect: connection refused May 17 00:42:33.717611 kubelet[1873]: E0517 00:42:33.717539 1873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.56:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:33.803653 
env[1335]: time="2025-05-17T00:42:33.802804490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,Uid:d29b2fa7ef8343b6dda95df6df2d4329,Namespace:kube-system,Attempt:0,} returns sandbox id \"43dc8b526a7c76a3fdab82c383567b0fbbce0f1c0cb51f75f0f385551a6d1be9\"" May 17 00:42:33.806152 kubelet[1873]: W0517 00:42:33.806003 1873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.56:6443: connect: connection refused May 17 00:42:33.806152 kubelet[1873]: E0517 00:42:33.806099 1873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.56:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:33.807658 kubelet[1873]: E0517 00:42:33.807097 1873 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8" May 17 00:42:33.809393 env[1335]: time="2025-05-17T00:42:33.809350038Z" level=info msg="CreateContainer within sandbox \"43dc8b526a7c76a3fdab82c383567b0fbbce0f1c0cb51f75f0f385551a6d1be9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:42:33.835222 env[1335]: time="2025-05-17T00:42:33.835161966Z" level=info msg="CreateContainer within sandbox \"43dc8b526a7c76a3fdab82c383567b0fbbce0f1c0cb51f75f0f385551a6d1be9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f07c7f3852c576b6e6c4a2346f8e127f9ec50ef4113196219848e15db7ac9473\"" May 17 
00:42:33.836393 env[1335]: time="2025-05-17T00:42:33.836349709Z" level=info msg="StartContainer for \"f07c7f3852c576b6e6c4a2346f8e127f9ec50ef4113196219848e15db7ac9473\"" May 17 00:42:33.838545 env[1335]: time="2025-05-17T00:42:33.838493568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,Uid:17c115f86821aab2ef9e8b836dbfe646,Namespace:kube-system,Attempt:0,} returns sandbox id \"a64790c58317a6de29c1edfde6b7f3e82b7981455f59bb1b8cca9bf96a93c228\"" May 17 00:42:33.840455 kubelet[1873]: E0517 00:42:33.840267 1873 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af323" May 17 00:42:33.842100 env[1335]: time="2025-05-17T00:42:33.842059363Z" level=info msg="CreateContainer within sandbox \"a64790c58317a6de29c1edfde6b7f3e82b7981455f59bb1b8cca9bf96a93c228\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:42:33.865634 env[1335]: time="2025-05-17T00:42:33.865567800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260,Uid:b49af39a50a49e37ff9c5cba22f92d1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e0ba28f0629e6cbecc33281535a3afafc4407d7e7bc22e553741de6f6fabecd\"" May 17 00:42:33.866019 env[1335]: time="2025-05-17T00:42:33.865974400Z" level=info msg="CreateContainer within sandbox \"a64790c58317a6de29c1edfde6b7f3e82b7981455f59bb1b8cca9bf96a93c228\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aa83592b34ead539417093f438f8bafba757effe16d20e84379da4170aaac16f\"" May 17 00:42:33.866941 env[1335]: time="2025-05-17T00:42:33.866810941Z" level=info msg="StartContainer for \"aa83592b34ead539417093f438f8bafba757effe16d20e84379da4170aaac16f\"" May 17 00:42:33.868210 
kubelet[1873]: W0517 00:42:33.868061 1873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260&limit=500&resourceVersion=0": dial tcp 10.128.0.56:6443: connect: connection refused May 17 00:42:33.868210 kubelet[1873]: E0517 00:42:33.868165 1873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260&limit=500&resourceVersion=0\": dial tcp 10.128.0.56:6443: connect: connection refused" logger="UnhandledError" May 17 00:42:33.869088 kubelet[1873]: E0517 00:42:33.868664 1873 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af323" May 17 00:42:33.870380 env[1335]: time="2025-05-17T00:42:33.870343137Z" level=info msg="CreateContainer within sandbox \"4e0ba28f0629e6cbecc33281535a3afafc4407d7e7bc22e553741de6f6fabecd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:42:33.880924 kubelet[1873]: W0517 00:42:33.880738 1873 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.56:6443: connect: connection refused May 17 00:42:33.880924 kubelet[1873]: E0517 00:42:33.880877 1873 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.56:6443: connect: 
connection refused" logger="UnhandledError" May 17 00:42:33.910532 env[1335]: time="2025-05-17T00:42:33.910468400Z" level=info msg="CreateContainer within sandbox \"4e0ba28f0629e6cbecc33281535a3afafc4407d7e7bc22e553741de6f6fabecd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"832dc12fb284341f38c4a4a4c9301f92b7052a51aee1b814a6853069cf1f2ad6\"" May 17 00:42:33.911661 env[1335]: time="2025-05-17T00:42:33.911616193Z" level=info msg="StartContainer for \"832dc12fb284341f38c4a4a4c9301f92b7052a51aee1b814a6853069cf1f2ad6\"" May 17 00:42:34.017924 env[1335]: time="2025-05-17T00:42:34.017857021Z" level=info msg="StartContainer for \"f07c7f3852c576b6e6c4a2346f8e127f9ec50ef4113196219848e15db7ac9473\" returns successfully" May 17 00:42:34.142457 kubelet[1873]: E0517 00:42:34.141392 1873 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260?timeout=10s\": dial tcp 10.128.0.56:6443: connect: connection refused" interval="1.6s" May 17 00:42:34.144792 env[1335]: time="2025-05-17T00:42:34.144701695Z" level=info msg="StartContainer for \"832dc12fb284341f38c4a4a4c9301f92b7052a51aee1b814a6853069cf1f2ad6\" returns successfully" May 17 00:42:34.150467 env[1335]: time="2025-05-17T00:42:34.148364012Z" level=info msg="StartContainer for \"aa83592b34ead539417093f438f8bafba757effe16d20e84379da4170aaac16f\" returns successfully" May 17 00:42:34.325676 kubelet[1873]: I0517 00:42:34.325578 1873 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:37.618782 kubelet[1873]: I0517 00:42:37.618740 1873 apiserver.go:52] "Watching apiserver" May 17 00:42:37.703862 kubelet[1873]: I0517 00:42:37.703817 1873 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:42:37.738255 
kubelet[1873]: E0517 00:42:37.738202 1873 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" not found" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:37.811755 kubelet[1873]: I0517 00:42:37.811716 1873 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:37.812027 kubelet[1873]: E0517 00:42:37.812000 1873 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\": node \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" not found" May 17 00:42:39.949242 systemd[1]: Reloading. May 17 00:42:40.074710 /usr/lib/systemd/system-generators/torcx-generator[2164]: time="2025-05-17T00:42:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:42:40.075387 /usr/lib/systemd/system-generators/torcx-generator[2164]: time="2025-05-17T00:42:40Z" level=info msg="torcx already run" May 17 00:42:40.195896 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:42:40.195922 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:42:40.223156 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:42:40.371110 systemd[1]: Stopping kubelet.service... 
May 17 00:42:40.394402 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:42:40.395021 systemd[1]: Stopped kubelet.service. May 17 00:42:40.401818 kernel: kauditd_printk_skb: 44 callbacks suppressed May 17 00:42:40.401922 kernel: audit: type=1131 audit(1747442560.394:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:40.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:40.400544 systemd[1]: Starting kubelet.service... May 17 00:42:40.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:40.683071 systemd[1]: Started kubelet.service. May 17 00:42:40.706456 kernel: audit: type=1130 audit(1747442560.683:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:40.792696 kubelet[2220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:42:40.793591 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:42:40.793932 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:42:40.794323 kubelet[2220]: I0517 00:42:40.794261 2220 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:42:40.810228 kubelet[2220]: I0517 00:42:40.810188 2220 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:42:40.810393 kubelet[2220]: I0517 00:42:40.810376 2220 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:42:40.810835 kubelet[2220]: I0517 00:42:40.810792 2220 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:42:40.812477 kubelet[2220]: I0517 00:42:40.812385 2220 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:42:40.814864 kubelet[2220]: I0517 00:42:40.814839 2220 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:42:40.819919 kubelet[2220]: E0517 00:42:40.819864 2220 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:42:40.819919 kubelet[2220]: I0517 00:42:40.819900 2220 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:42:40.823911 kubelet[2220]: I0517 00:42:40.823868 2220 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:42:40.824506 kubelet[2220]: I0517 00:42:40.824473 2220 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:42:40.824739 kubelet[2220]: I0517 00:42:40.824685 2220 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:42:40.824971 kubelet[2220]: I0517 00:42:40.824730 2220 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Topolog
yManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:42:40.825158 kubelet[2220]: I0517 00:42:40.824983 2220 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:42:40.825158 kubelet[2220]: I0517 00:42:40.825001 2220 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:42:40.825158 kubelet[2220]: I0517 00:42:40.825077 2220 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:40.825344 kubelet[2220]: I0517 00:42:40.825235 2220 kubelet.go:408] "Attempting to sync node with API server" May 17 00:42:40.825344 kubelet[2220]: I0517 00:42:40.825257 2220 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:42:40.825344 kubelet[2220]: I0517 00:42:40.825297 2220 kubelet.go:314] "Adding apiserver pod source" May 17 00:42:40.825344 kubelet[2220]: I0517 00:42:40.825313 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:42:40.827965 kubelet[2220]: I0517 00:42:40.827932 2220 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:42:40.829291 kubelet[2220]: I0517 00:42:40.829258 2220 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:42:40.843000 audit[2220]: AVC avc: denied { mac_admin } for pid=2220 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:40.844555 kubelet[2220]: I0517 00:42:40.840397 2220 server.go:1274] "Started kubelet" May 17 00:42:40.859271 kubelet[2220]: I0517 00:42:40.859188 2220 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:42:40.859958 kubelet[2220]: I0517 00:42:40.859912 2220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:42:40.860613 kubelet[2220]: I0517 00:42:40.860587 
2220 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:42:40.862493 kubelet[2220]: I0517 00:42:40.862462 2220 server.go:449] "Adding debug handlers to kubelet server" May 17 00:42:40.843000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:40.871377 kubelet[2220]: I0517 00:42:40.871318 2220 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:42:40.876982 kernel: audit: type=1400 audit(1747442560.843:215): avc: denied { mac_admin } for pid=2220 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:40.877073 kernel: audit: type=1401 audit(1747442560.843:215): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:40.843000 audit[2220]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000be58f0 a1=c00066d5c0 a2=c000be58c0 a3=25 items=0 ppid=1 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:40.910138 kubelet[2220]: I0517 00:42:40.898013 2220 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:42:40.910138 kubelet[2220]: I0517 00:42:40.898073 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:42:40.910138 kubelet[2220]: I0517 00:42:40.908394 2220 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:42:40.910446 kernel: audit: type=1300 audit(1747442560.843:215): arch=c000003e syscall=188 success=no exit=-22 a0=c000be58f0 a1=c00066d5c0 a2=c000be58c0 a3=25 items=0 ppid=1 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:40.914776 kubelet[2220]: I0517 00:42:40.914742 2220 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:42:40.919476 kubelet[2220]: I0517 00:42:40.919435 2220 factory.go:221] Registration of the systemd container factory successfully May 17 00:42:40.919996 kubelet[2220]: I0517 00:42:40.919931 2220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:42:40.922964 kubelet[2220]: I0517 00:42:40.922885 2220 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:42:40.923096 kubelet[2220]: I0517 00:42:40.923082 2220 reconciler.go:26] "Reconciler: start to sync state" May 17 00:42:40.923674 kubelet[2220]: I0517 00:42:40.923648 2220 factory.go:221] Registration of the containerd container factory successfully May 17 00:42:40.931733 kubelet[2220]: I0517 00:42:40.929387 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:42:40.931733 kubelet[2220]: I0517 00:42:40.931326 2220 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:42:40.931733 kubelet[2220]: I0517 00:42:40.931352 2220 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:42:40.931733 kubelet[2220]: I0517 00:42:40.931378 2220 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:42:40.931733 kubelet[2220]: E0517 00:42:40.931465 2220 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:42:40.843000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:40.977781 kernel: audit: type=1327 audit(1747442560.843:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:40.982115 kubelet[2220]: E0517 00:42:40.953022 2220 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:42:40.897000 audit[2220]: AVC avc: denied { mac_admin } for pid=2220 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:41.005595 kernel: audit: type=1400 audit(1747442560.897:216): avc: denied { mac_admin } for pid=2220 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:41.005734 kernel: audit: type=1401 audit(1747442560.897:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:40.897000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:42:40.897000 audit[2220]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00069afa0 a1=c000bfc720 a2=c000753a10 a3=25 items=0 ppid=1 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:41.033812 kubelet[2220]: E0517 00:42:41.033120 2220 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:42:41.047847 kernel: audit: type=1300 audit(1747442560.897:216): arch=c000003e syscall=188 success=no exit=-22 a0=c00069afa0 a1=c000bfc720 a2=c000753a10 a3=25 items=0 ppid=1 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:41.076934 kernel: audit: type=1327 audit(1747442560.897:216): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:40.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:41.151866 kubelet[2220]: I0517 00:42:41.151820 2220 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:42:41.151866 kubelet[2220]: I0517 00:42:41.151846 2220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:42:41.152102 kubelet[2220]: I0517 00:42:41.151902 2220 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:41.152281 kubelet[2220]: I0517 00:42:41.152248 2220 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:42:41.152360 kubelet[2220]: I0517 00:42:41.152274 2220 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:42:41.152360 kubelet[2220]: I0517 00:42:41.152329 2220 policy_none.go:49] "None policy: Start" May 17 00:42:41.153789 kubelet[2220]: I0517 00:42:41.153761 2220 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:42:41.153993 kubelet[2220]: I0517 00:42:41.153975 2220 state_mem.go:35] "Initializing new in-memory state store" May 17 00:42:41.154414 kubelet[2220]: I0517 00:42:41.154378 2220 state_mem.go:75] "Updated machine memory state" May 17 00:42:41.160000 audit[2220]: AVC avc: denied { mac_admin } for pid=2220 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:42:41.160000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 
00:42:41.160000 audit[2220]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00103f410 a1=c00103cc00 a2=c00103f3e0 a3=25 items=0 ppid=1 pid=2220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:41.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:42:41.162041 kubelet[2220]: I0517 00:42:41.160885 2220 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:42:41.162041 kubelet[2220]: I0517 00:42:41.160998 2220 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:42:41.162041 kubelet[2220]: I0517 00:42:41.161276 2220 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:42:41.162041 kubelet[2220]: I0517 00:42:41.161317 2220 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:42:41.162041 kubelet[2220]: I0517 00:42:41.161890 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:42:41.260099 kubelet[2220]: W0517 00:42:41.259951 2220 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 17 00:42:41.272385 kubelet[2220]: W0517 00:42:41.272337 2220 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 
63 characters] May 17 00:42:41.272774 kubelet[2220]: W0517 00:42:41.272746 2220 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 17 00:42:41.284979 kubelet[2220]: I0517 00:42:41.284935 2220 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.298055 kubelet[2220]: I0517 00:42:41.298018 2220 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.298384 kubelet[2220]: I0517 00:42:41.298361 2220 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.339306 kubelet[2220]: I0517 00:42:41.339248 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.339718 kubelet[2220]: I0517 00:42:41.339681 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17c115f86821aab2ef9e8b836dbfe646-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"17c115f86821aab2ef9e8b836dbfe646\") " pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.339948 kubelet[2220]: I0517 00:42:41.339907 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b49af39a50a49e37ff9c5cba22f92d1e-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"b49af39a50a49e37ff9c5cba22f92d1e\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.340121 kubelet[2220]: I0517 00:42:41.340094 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.340300 kubelet[2220]: I0517 00:42:41.340269 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.340573 kubelet[2220]: I0517 00:42:41.340546 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.340776 kubelet[2220]: I0517 00:42:41.340740 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b49af39a50a49e37ff9c5cba22f92d1e-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"b49af39a50a49e37ff9c5cba22f92d1e\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.340957 kubelet[2220]: I0517 00:42:41.340931 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b49af39a50a49e37ff9c5cba22f92d1e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"b49af39a50a49e37ff9c5cba22f92d1e\") " pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.341210 kubelet[2220]: I0517 00:42:41.341167 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d29b2fa7ef8343b6dda95df6df2d4329-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" (UID: \"d29b2fa7ef8343b6dda95df6df2d4329\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:41.826578 kubelet[2220]: I0517 00:42:41.826528 2220 apiserver.go:52] "Watching apiserver" May 17 00:42:41.924151 kubelet[2220]: I0517 00:42:41.924074 2220 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:42:42.078365 kubelet[2220]: W0517 00:42:42.078231 2220 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] May 17 00:42:42.078365 kubelet[2220]: E0517 00:42:42.078325 2220 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" already exists" 
pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:42:42.119945 kubelet[2220]: I0517 00:42:42.119856 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" podStartSLOduration=1.119814118 podStartE2EDuration="1.119814118s" podCreationTimestamp="2025-05-17 00:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:42.106865649 +0000 UTC m=+1.391195168" watchObservedRunningTime="2025-05-17 00:42:42.119814118 +0000 UTC m=+1.404143635" May 17 00:42:42.120197 kubelet[2220]: I0517 00:42:42.120030 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" podStartSLOduration=1.120017883 podStartE2EDuration="1.120017883s" podCreationTimestamp="2025-05-17 00:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:42.119567774 +0000 UTC m=+1.403897290" watchObservedRunningTime="2025-05-17 00:42:42.120017883 +0000 UTC m=+1.404347402" May 17 00:42:44.511997 kubelet[2220]: I0517 00:42:44.511908 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" podStartSLOduration=3.511877438 podStartE2EDuration="3.511877438s" podCreationTimestamp="2025-05-17 00:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:42.13199759 +0000 UTC m=+1.416327107" watchObservedRunningTime="2025-05-17 00:42:44.511877438 +0000 UTC m=+3.796206956" May 17 00:42:45.764948 update_engine[1319]: I0517 00:42:45.764508 1319 
update_attempter.cc:509] Updating boot flags... May 17 00:42:45.930948 kubelet[2220]: I0517 00:42:45.930904 2220 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:42:45.931673 env[1335]: time="2025-05-17T00:42:45.931619634Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:42:45.932203 kubelet[2220]: I0517 00:42:45.932074 2220 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:42:46.883229 kubelet[2220]: I0517 00:42:46.883157 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58d98b43-4e5a-47fd-abc9-868d2ba0dde7-xtables-lock\") pod \"kube-proxy-66hcq\" (UID: \"58d98b43-4e5a-47fd-abc9-868d2ba0dde7\") " pod="kube-system/kube-proxy-66hcq" May 17 00:42:46.883229 kubelet[2220]: I0517 00:42:46.883221 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58d98b43-4e5a-47fd-abc9-868d2ba0dde7-lib-modules\") pod \"kube-proxy-66hcq\" (UID: \"58d98b43-4e5a-47fd-abc9-868d2ba0dde7\") " pod="kube-system/kube-proxy-66hcq" May 17 00:42:46.883570 kubelet[2220]: I0517 00:42:46.883261 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdv59\" (UniqueName: \"kubernetes.io/projected/58d98b43-4e5a-47fd-abc9-868d2ba0dde7-kube-api-access-zdv59\") pod \"kube-proxy-66hcq\" (UID: \"58d98b43-4e5a-47fd-abc9-868d2ba0dde7\") " pod="kube-system/kube-proxy-66hcq" May 17 00:42:46.883570 kubelet[2220]: I0517 00:42:46.883299 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58d98b43-4e5a-47fd-abc9-868d2ba0dde7-kube-proxy\") pod \"kube-proxy-66hcq\" (UID: 
\"58d98b43-4e5a-47fd-abc9-868d2ba0dde7\") " pod="kube-system/kube-proxy-66hcq" May 17 00:42:46.993657 kubelet[2220]: I0517 00:42:46.993599 2220 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:42:47.084915 kubelet[2220]: I0517 00:42:47.084825 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm5zk\" (UniqueName: \"kubernetes.io/projected/6f889f17-ed15-47c4-875f-05363e204fa3-kube-api-access-tm5zk\") pod \"tigera-operator-7c5755cdcb-g7f94\" (UID: \"6f889f17-ed15-47c4-875f-05363e204fa3\") " pod="tigera-operator/tigera-operator-7c5755cdcb-g7f94" May 17 00:42:47.084915 kubelet[2220]: I0517 00:42:47.084906 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6f889f17-ed15-47c4-875f-05363e204fa3-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-g7f94\" (UID: \"6f889f17-ed15-47c4-875f-05363e204fa3\") " pod="tigera-operator/tigera-operator-7c5755cdcb-g7f94" May 17 00:42:47.120057 env[1335]: time="2025-05-17T00:42:47.119984049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66hcq,Uid:58d98b43-4e5a-47fd-abc9-868d2ba0dde7,Namespace:kube-system,Attempt:0,}" May 17 00:42:47.149306 env[1335]: time="2025-05-17T00:42:47.149106387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:47.149558 env[1335]: time="2025-05-17T00:42:47.149161018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:47.149558 env[1335]: time="2025-05-17T00:42:47.149209403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:47.150478 env[1335]: time="2025-05-17T00:42:47.150387459Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/291882d0d9c6948fbac3a02140869eabdd11b3664f4b5e95c290d9f768fedd5b pid=2291 runtime=io.containerd.runc.v2 May 17 00:42:47.197689 systemd[1]: run-containerd-runc-k8s.io-291882d0d9c6948fbac3a02140869eabdd11b3664f4b5e95c290d9f768fedd5b-runc.xrthJR.mount: Deactivated successfully. May 17 00:42:47.242653 env[1335]: time="2025-05-17T00:42:47.241903992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66hcq,Uid:58d98b43-4e5a-47fd-abc9-868d2ba0dde7,Namespace:kube-system,Attempt:0,} returns sandbox id \"291882d0d9c6948fbac3a02140869eabdd11b3664f4b5e95c290d9f768fedd5b\"" May 17 00:42:47.248214 env[1335]: time="2025-05-17T00:42:47.248153759Z" level=info msg="CreateContainer within sandbox \"291882d0d9c6948fbac3a02140869eabdd11b3664f4b5e95c290d9f768fedd5b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:42:47.268393 env[1335]: time="2025-05-17T00:42:47.268333430Z" level=info msg="CreateContainer within sandbox \"291882d0d9c6948fbac3a02140869eabdd11b3664f4b5e95c290d9f768fedd5b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e09017ce495447117708f5040f36914a701e3b25653d708a4a1a5352377d5738\"" May 17 00:42:47.271191 env[1335]: time="2025-05-17T00:42:47.271151189Z" level=info msg="StartContainer for \"e09017ce495447117708f5040f36914a701e3b25653d708a4a1a5352377d5738\"" May 17 00:42:47.273733 env[1335]: time="2025-05-17T00:42:47.273048425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-g7f94,Uid:6f889f17-ed15-47c4-875f-05363e204fa3,Namespace:tigera-operator,Attempt:0,}" May 17 00:42:47.312251 env[1335]: time="2025-05-17T00:42:47.312148989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:42:47.312705 env[1335]: time="2025-05-17T00:42:47.312637253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:42:47.312973 env[1335]: time="2025-05-17T00:42:47.312911815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:42:47.313869 env[1335]: time="2025-05-17T00:42:47.313795127Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94b6fb53d395d73a669b31499844c8022fd7de9f14df02cd853a7b88e2f39d14 pid=2351 runtime=io.containerd.runc.v2 May 17 00:42:47.395817 env[1335]: time="2025-05-17T00:42:47.395757488Z" level=info msg="StartContainer for \"e09017ce495447117708f5040f36914a701e3b25653d708a4a1a5352377d5738\" returns successfully" May 17 00:42:47.444409 env[1335]: time="2025-05-17T00:42:47.444254794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-g7f94,Uid:6f889f17-ed15-47c4-875f-05363e204fa3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"94b6fb53d395d73a669b31499844c8022fd7de9f14df02cd853a7b88e2f39d14\"" May 17 00:42:47.450327 env[1335]: time="2025-05-17T00:42:47.448194917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:42:47.612000 audit[2436]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.618412 kernel: kauditd_printk_skb: 4 callbacks suppressed May 17 00:42:47.618611 kernel: audit: type=1325 audit(1747442567.612:218): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.613000 audit[2437]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2437 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.651627 kernel: audit: type=1325 audit(1747442567.613:219): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.654357 kernel: audit: type=1300 audit(1747442567.613:219): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffa252260 a2=0 a3=7ffffa25224c items=0 ppid=2367 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.613000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffa252260 a2=0 a3=7ffffa25224c items=0 ppid=2367 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:42:47.702470 kernel: audit: type=1327 audit(1747442567.613:219): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:42:47.615000 audit[2438]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.615000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8a074340 a2=0 a3=7ffc8a07432c items=0 ppid=2367 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.752982 kernel: audit: type=1325 audit(1747442567.615:220): table=nat:40 family=10 entries=1 op=nft_register_chain 
pid=2438 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.753159 kernel: audit: type=1300 audit(1747442567.615:220): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8a074340 a2=0 a3=7ffc8a07432c items=0 ppid=2367 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.757479 kernel: audit: type=1327 audit(1747442567.615:220): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:42:47.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:42:47.784810 kernel: audit: type=1325 audit(1747442567.617:221): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.617000 audit[2439]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.617000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff91e34c30 a2=0 a3=7fff91e34c1c items=0 ppid=2367 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.817681 kernel: audit: type=1300 audit(1747442567.617:221): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff91e34c30 a2=0 a3=7fff91e34c1c items=0 ppid=2367 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.617000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:42:47.834166 kernel: audit: type=1327 audit(1747442567.617:221): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:42:47.612000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef73cc2f0 a2=0 a3=7ffef73cc2dc items=0 ppid=2367 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:42:47.638000 audit[2440]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.638000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff25a5d980 a2=0 a3=7fff25a5d96c items=0 ppid=2367 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.638000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:42:47.653000 audit[2441]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.653000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff478c06e0 a2=0 a3=7fff478c06cc items=0 ppid=2367 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:42:47.730000 audit[2442]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.730000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcafe9c2c0 a2=0 a3=7ffcafe9c2ac items=0 ppid=2367 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:42:47.741000 audit[2444]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.741000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe4b808df0 a2=0 a3=7ffe4b808ddc items=0 ppid=2367 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 17 00:42:47.751000 audit[2447]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.751000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff6afe9180 
a2=0 a3=7fff6afe916c items=0 ppid=2367 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 17 00:42:47.752000 audit[2448]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.752000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaedac290 a2=0 a3=7ffeaedac27c items=0 ppid=2367 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:42:47.768000 audit[2450]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.768000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfdbba6c0 a2=0 a3=7ffdfdbba6ac items=0 ppid=2367 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.768000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:42:47.768000 audit[2451]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.768000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6ede19e0 a2=0 a3=7ffd6ede19cc items=0 ppid=2367 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:42:47.774000 audit[2453]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.774000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff60ee1d60 a2=0 a3=7fff60ee1d4c items=0 ppid=2367 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:42:47.784000 audit[2456]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.784000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffc55a1b1c0 a2=0 a3=7ffc55a1b1ac items=0 ppid=2367 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 17 00:42:47.784000 audit[2457]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.784000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0d080560 a2=0 a3=7ffd0d08054c items=0 ppid=2367 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:42:47.789000 audit[2459]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.789000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe97894640 a2=0 a3=7ffe9789462c items=0 ppid=2367 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.789000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:42:47.794000 audit[2460]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.794000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff55237a70 a2=0 a3=7fff55237a5c items=0 ppid=2367 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:42:47.799000 audit[2462]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.799000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca525bbe0 a2=0 a3=7ffca525bbcc items=0 ppid=2367 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:42:47.809000 audit[2465]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.809000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffd88a9bda0 a2=0 a3=7ffd88a9bd8c items=0 ppid=2367 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:42:47.817000 audit[2468]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.817000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc2c9b0f10 a2=0 a3=7ffc2c9b0efc items=0 ppid=2367 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:42:47.837000 audit[2469]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.837000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd033311f0 a2=0 a3=7ffd033311dc items=0 ppid=2367 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.837000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:42:47.843000 audit[2471]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.843000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff4a735490 a2=0 a3=7fff4a73547c items=0 ppid=2367 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:42:47.848000 audit[2474]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.848000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe6f0088d0 a2=0 a3=7ffe6f0088bc items=0 ppid=2367 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:42:47.850000 audit[2475]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.850000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedeec12e0 a2=0 a3=7ffedeec12cc items=0 ppid=2367 pid=2475 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:42:47.854000 audit[2477]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:42:47.854000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe35a82860 a2=0 a3=7ffe35a8284c items=0 ppid=2367 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:42:47.894000 audit[2483]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:47.894000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3ae3bbd0 a2=0 a3=7ffd3ae3bbbc items=0 ppid=2367 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:47.908000 audit[2483]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" May 17 00:42:47.908000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd3ae3bbd0 a2=0 a3=7ffd3ae3bbbc items=0 ppid=2367 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:47.911000 audit[2488]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.911000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd0aa83070 a2=0 a3=7ffd0aa8305c items=0 ppid=2367 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:42:47.917000 audit[2490]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.917000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffed83f18d0 a2=0 a3=7ffed83f18bc items=0 ppid=2367 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.917000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 17 00:42:47.924000 audit[2493]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.924000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffeb276c020 a2=0 a3=7ffeb276c00c items=0 ppid=2367 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 17 00:42:47.926000 audit[2494]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.926000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe97c8b690 a2=0 a3=7ffe97c8b67c items=0 ppid=2367 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:42:47.930000 audit[2496]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.930000 audit[2496]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd20d8e5f0 a2=0 a3=7ffd20d8e5dc items=0 ppid=2367 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:42:47.932000 audit[2497]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.932000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff156075a0 a2=0 a3=7fff1560758c items=0 ppid=2367 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:42:47.936000 audit[2499]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.936000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe8a77cf80 a2=0 a3=7ffe8a77cf6c items=0 ppid=2367 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.936000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 17 00:42:47.942000 audit[2502]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.942000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdee5f7570 a2=0 a3=7ffdee5f755c items=0 ppid=2367 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:42:47.944000 audit[2503]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.944000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefe0a52c0 a2=0 a3=7ffefe0a52ac items=0 ppid=2367 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:42:47.949000 audit[2505]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.949000 audit[2505]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7fffa46f6000 a2=0 a3=7fffa46f5fec items=0 ppid=2367 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:42:47.951000 audit[2506]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.951000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6b912b70 a2=0 a3=7ffe6b912b5c items=0 ppid=2367 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:42:47.955000 audit[2508]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.955000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcd6f6c810 a2=0 a3=7ffcd6f6c7fc items=0 ppid=2367 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.955000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:42:47.966000 audit[2511]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.966000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff30763c20 a2=0 a3=7fff30763c0c items=0 ppid=2367 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:42:47.975000 audit[2514]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.975000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4037a8a0 a2=0 a3=7ffe4037a88c items=0 ppid=2367 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 17 00:42:47.976000 audit[2515]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.976000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef066de70 a2=0 a3=7ffef066de5c items=0 ppid=2367 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:42:47.980000 audit[2517]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.980000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffede203f0 a2=0 a3=7fffede203dc items=0 ppid=2367 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:42:47.986000 audit[2520]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.986000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffd6b4ede50 a2=0 a3=7ffd6b4ede3c items=0 ppid=2367 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.986000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:42:47.989000 audit[2521]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.989000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc291a9bd0 a2=0 a3=7ffc291a9bbc items=0 ppid=2367 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.989000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:42:47.993000 audit[2523]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.993000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffef206afd0 a2=0 a3=7ffef206afbc items=0 ppid=2367 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:42:47.995000 audit[2524]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:47.995000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6c603180 a2=0 
a3=7ffd6c60316c items=0 ppid=2367 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:47.995000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:42:48.012000 audit[2526]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:48.012000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff99b228c0 a2=0 a3=7fff99b228ac items=0 ppid=2367 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:42:48.025000 audit[2529]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:42:48.025000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcfe8e1370 a2=0 a3=7ffcfe8e135c items=0 ppid=2367 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:42:48.031000 audit[2531]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:42:48.031000 audit[2531]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffebf828b00 a2=0 a3=7ffebf828aec items=0 ppid=2367 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.031000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:48.032000 audit[2531]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:42:48.032000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffebf828b00 a2=0 a3=7ffebf828aec items=0 ppid=2367 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:48.032000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:48.813942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876678150.mount: Deactivated successfully. 
May 17 00:42:49.992842 env[1335]: time="2025-05-17T00:42:49.992765023Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:49.995737 env[1335]: time="2025-05-17T00:42:49.995660601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:49.997980 env[1335]: time="2025-05-17T00:42:49.997939922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:50.000172 env[1335]: time="2025-05-17T00:42:50.000128028Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:42:50.001259 env[1335]: time="2025-05-17T00:42:50.001207349Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:42:50.005230 env[1335]: time="2025-05-17T00:42:50.004677215Z" level=info msg="CreateContainer within sandbox \"94b6fb53d395d73a669b31499844c8022fd7de9f14df02cd853a7b88e2f39d14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:42:50.026216 env[1335]: time="2025-05-17T00:42:50.026139451Z" level=info msg="CreateContainer within sandbox \"94b6fb53d395d73a669b31499844c8022fd7de9f14df02cd853a7b88e2f39d14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2b2165503b3deb51689ee480ae5a683a41401a9a1b10bdf2d2d93017603828cf\"" May 17 00:42:50.027292 env[1335]: time="2025-05-17T00:42:50.027221584Z" level=info msg="StartContainer for 
\"2b2165503b3deb51689ee480ae5a683a41401a9a1b10bdf2d2d93017603828cf\"" May 17 00:42:50.118509 env[1335]: time="2025-05-17T00:42:50.114638670Z" level=info msg="StartContainer for \"2b2165503b3deb51689ee480ae5a683a41401a9a1b10bdf2d2d93017603828cf\" returns successfully" May 17 00:42:51.109272 kubelet[2220]: I0517 00:42:51.109126 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-66hcq" podStartSLOduration=5.109099942 podStartE2EDuration="5.109099942s" podCreationTimestamp="2025-05-17 00:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:48.1358409 +0000 UTC m=+7.420170417" watchObservedRunningTime="2025-05-17 00:42:51.109099942 +0000 UTC m=+10.393429458" May 17 00:42:51.114790 kubelet[2220]: I0517 00:42:51.114713 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-g7f94" podStartSLOduration=2.559396999 podStartE2EDuration="5.114691036s" podCreationTimestamp="2025-05-17 00:42:46 +0000 UTC" firstStartedPulling="2025-05-17 00:42:47.447324587 +0000 UTC m=+6.731654092" lastFinishedPulling="2025-05-17 00:42:50.002618634 +0000 UTC m=+9.286948129" observedRunningTime="2025-05-17 00:42:51.109013854 +0000 UTC m=+10.393343371" watchObservedRunningTime="2025-05-17 00:42:51.114691036 +0000 UTC m=+10.399020543" May 17 00:42:57.009506 kernel: kauditd_printk_skb: 143 callbacks suppressed May 17 00:42:57.009693 kernel: audit: type=1325 audit(1747442576.999:269): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:56.999000 audit[2599]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:56.999000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 
a1=7ffe899d8d50 a2=0 a3=7ffe899d8d3c items=0 ppid=2367 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:57.063469 kernel: audit: type=1300 audit(1747442576.999:269): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe899d8d50 a2=0 a3=7ffe899d8d3c items=0 ppid=2367 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:56.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:57.087464 kernel: audit: type=1327 audit(1747442576.999:269): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:57.028000 audit[2599]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:57.113465 kernel: audit: type=1325 audit(1747442577.028:270): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:57.028000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe899d8d50 a2=0 a3=0 items=0 ppid=2367 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:57.149465 kernel: audit: type=1300 audit(1747442577.028:270): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe899d8d50 a2=0 a3=0 items=0 ppid=2367 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:57.149620 kernel: audit: type=1327 audit(1747442577.028:270): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:57.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:57.170000 audit[2601]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:57.194454 kernel: audit: type=1325 audit(1747442577.170:271): table=filter:91 family=2 entries=15 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:57.170000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcc79e2be0 a2=0 a3=7ffcc79e2bcc items=0 ppid=2367 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:57.235465 kernel: audit: type=1300 audit(1747442577.170:271): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcc79e2be0 a2=0 a3=7ffcc79e2bcc items=0 ppid=2367 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:57.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:57.252468 kernel: audit: type=1327 audit(1747442577.170:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:57.209000 audit[2601]: NETFILTER_CFG table=nat:92 family=2 
entries=12 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:57.209000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc79e2be0 a2=0 a3=0 items=0 ppid=2367 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:42:57.269672 kernel: audit: type=1325 audit(1747442577.209:272): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:42:57.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:42:57.719000 audit[1595]: USER_END pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:42:57.719000 audit[1595]: CRED_DISP pid=1595 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 00:42:57.720997 sudo[1595]: pam_unix(sudo:session): session closed for user root May 17 00:42:57.764606 sshd[1591]: pam_unix(sshd:session): session closed for user core May 17 00:42:57.766000 audit[1591]: USER_END pid=1591 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:42:57.766000 audit[1591]: CRED_DISP pid=1591 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:42:57.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.56:22-139.178.89.65:58966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:42:57.771224 systemd[1]: sshd@6-10.128.0.56:22-139.178.89.65:58966.service: Deactivated successfully. May 17 00:42:57.773350 systemd-logind[1312]: Session 7 logged out. Waiting for processes to exit. May 17 00:42:57.775012 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:42:57.776944 systemd-logind[1312]: Removed session 7. 
May 17 00:43:01.832989 kubelet[2220]: W0517 00:43:01.832940 2220 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' and this object May 17 00:43:01.833777 kubelet[2220]: E0517 00:43:01.833739 2220 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' and this object" logger="UnhandledError" May 17 00:43:01.833966 kubelet[2220]: W0517 00:43:01.833389 2220 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' and this object May 17 00:43:01.834105 kubelet[2220]: E0517 00:43:01.834077 2220 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 
'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' and this object" logger="UnhandledError" May 17 00:43:01.834244 kubelet[2220]: W0517 00:43:01.833474 2220 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' and this object May 17 00:43:01.834412 kubelet[2220]: E0517 00:43:01.834376 2220 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' and this object" logger="UnhandledError" May 17 00:43:01.847000 audit[2622]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:01.847000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc6b3a0e70 a2=0 a3=7ffc6b3a0e5c items=0 ppid=2367 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:01.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:01.851000 audit[2622]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:01.851000 audit[2622]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc6b3a0e70 a2=0 a3=0 items=0 ppid=2367 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:01.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:01.887000 audit[2624]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:01.887000 audit[2624]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc7445aea0 a2=0 a3=7ffc7445ae8c items=0 ppid=2367 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:01.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:01.889827 kubelet[2220]: I0517 00:43:01.889744 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3b5e8e88-776a-4063-b404-8e336eea4cd1-typha-certs\") pod \"calico-typha-b5d8b7ddd-7fjrt\" (UID: \"3b5e8e88-776a-4063-b404-8e336eea4cd1\") " pod="calico-system/calico-typha-b5d8b7ddd-7fjrt" May 17 00:43:01.889987 kubelet[2220]: I0517 00:43:01.889823 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v587z\" (UniqueName: \"kubernetes.io/projected/3b5e8e88-776a-4063-b404-8e336eea4cd1-kube-api-access-v587z\") pod \"calico-typha-b5d8b7ddd-7fjrt\" (UID: \"3b5e8e88-776a-4063-b404-8e336eea4cd1\") " pod="calico-system/calico-typha-b5d8b7ddd-7fjrt" May 17 00:43:01.889987 
kubelet[2220]: I0517 00:43:01.889873 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b5e8e88-776a-4063-b404-8e336eea4cd1-tigera-ca-bundle\") pod \"calico-typha-b5d8b7ddd-7fjrt\" (UID: \"3b5e8e88-776a-4063-b404-8e336eea4cd1\") " pod="calico-system/calico-typha-b5d8b7ddd-7fjrt" May 17 00:43:01.891000 audit[2624]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:01.891000 audit[2624]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7445aea0 a2=0 a3=0 items=0 ppid=2367 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:01.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:02.091658 kubelet[2220]: I0517 00:43:02.091468 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-cni-net-dir\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.091658 kubelet[2220]: I0517 00:43:02.091544 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-flexvol-driver-host\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.091658 kubelet[2220]: I0517 00:43:02.091618 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-var-run-calico\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.091658 kubelet[2220]: I0517 00:43:02.091654 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-var-lib-calico\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.092014 kubelet[2220]: I0517 00:43:02.091682 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-xtables-lock\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.092014 kubelet[2220]: I0517 00:43:02.091708 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-policysync\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.092014 kubelet[2220]: I0517 00:43:02.091739 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a136356-4794-404d-b254-47d4350a628a-tigera-ca-bundle\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.092014 kubelet[2220]: I0517 00:43:02.091776 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-cni-bin-dir\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.092014 kubelet[2220]: I0517 00:43:02.091806 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-lib-modules\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.092287 kubelet[2220]: I0517 00:43:02.091840 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q4fm\" (UniqueName: \"kubernetes.io/projected/9a136356-4794-404d-b254-47d4350a628a-kube-api-access-6q4fm\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.092287 kubelet[2220]: I0517 00:43:02.091872 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9a136356-4794-404d-b254-47d4350a628a-cni-log-dir\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.092287 kubelet[2220]: I0517 00:43:02.091903 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9a136356-4794-404d-b254-47d4350a628a-node-certs\") pod \"calico-node-n2l6j\" (UID: \"9a136356-4794-404d-b254-47d4350a628a\") " pod="calico-system/calico-node-n2l6j" May 17 00:43:02.194642 kubelet[2220]: E0517 00:43:02.194601 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.194885 kubelet[2220]: W0517 00:43:02.194855 2220 driver-call.go:149] 
FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.195061 kubelet[2220]: E0517 00:43:02.195027 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.199856 kubelet[2220]: E0517 00:43:02.199815 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.200058 kubelet[2220]: W0517 00:43:02.200029 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.200214 kubelet[2220]: E0517 00:43:02.200185 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.226103 kubelet[2220]: E0517 00:43:02.226067 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.226378 kubelet[2220]: W0517 00:43:02.226349 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.227154 kubelet[2220]: E0517 00:43:02.227119 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.228679 kubelet[2220]: E0517 00:43:02.228652 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.228837 kubelet[2220]: W0517 00:43:02.228816 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.228999 kubelet[2220]: E0517 00:43:02.228974 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.229602 kubelet[2220]: E0517 00:43:02.229583 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.229773 kubelet[2220]: W0517 00:43:02.229751 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.229937 kubelet[2220]: E0517 00:43:02.229912 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.234776 kubelet[2220]: E0517 00:43:02.234746 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.234976 kubelet[2220]: W0517 00:43:02.234952 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.235188 kubelet[2220]: E0517 00:43:02.235144 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.235647 kubelet[2220]: E0517 00:43:02.235615 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.235647 kubelet[2220]: W0517 00:43:02.235648 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.235869 kubelet[2220]: E0517 00:43:02.235681 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.236083 kubelet[2220]: E0517 00:43:02.236059 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.236193 kubelet[2220]: W0517 00:43:02.236084 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.236286 kubelet[2220]: E0517 00:43:02.236222 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.236496 kubelet[2220]: E0517 00:43:02.236474 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.236617 kubelet[2220]: W0517 00:43:02.236497 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.236721 kubelet[2220]: E0517 00:43:02.236639 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.236900 kubelet[2220]: E0517 00:43:02.236864 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.236900 kubelet[2220]: W0517 00:43:02.236888 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.237056 kubelet[2220]: E0517 00:43:02.237017 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.237261 kubelet[2220]: E0517 00:43:02.237236 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.237372 kubelet[2220]: W0517 00:43:02.237262 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.237372 kubelet[2220]: E0517 00:43:02.237288 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.237790 kubelet[2220]: E0517 00:43:02.237763 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.237790 kubelet[2220]: W0517 00:43:02.237790 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.237949 kubelet[2220]: E0517 00:43:02.237817 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.238217 kubelet[2220]: E0517 00:43:02.238171 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.238315 kubelet[2220]: W0517 00:43:02.238217 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.238315 kubelet[2220]: E0517 00:43:02.238244 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.238673 kubelet[2220]: E0517 00:43:02.238648 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.238790 kubelet[2220]: W0517 00:43:02.238676 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.238855 kubelet[2220]: E0517 00:43:02.238809 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.239064 kubelet[2220]: E0517 00:43:02.239039 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.239161 kubelet[2220]: W0517 00:43:02.239065 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.239161 kubelet[2220]: E0517 00:43:02.239090 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.240529 kubelet[2220]: E0517 00:43:02.239537 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.240529 kubelet[2220]: W0517 00:43:02.239556 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.240529 kubelet[2220]: E0517 00:43:02.239582 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.240529 kubelet[2220]: E0517 00:43:02.239957 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.240529 kubelet[2220]: W0517 00:43:02.239971 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.240529 kubelet[2220]: E0517 00:43:02.239987 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.294010 kubelet[2220]: E0517 00:43:02.293967 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.294010 kubelet[2220]: W0517 00:43:02.294007 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.294310 kubelet[2220]: E0517 00:43:02.294042 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.294506 kubelet[2220]: E0517 00:43:02.294480 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.294506 kubelet[2220]: W0517 00:43:02.294506 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.294725 kubelet[2220]: E0517 00:43:02.294527 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.294944 kubelet[2220]: E0517 00:43:02.294908 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.294944 kubelet[2220]: W0517 00:43:02.294931 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.295108 kubelet[2220]: E0517 00:43:02.294956 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.295445 kubelet[2220]: E0517 00:43:02.295398 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.295445 kubelet[2220]: W0517 00:43:02.295444 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.295635 kubelet[2220]: E0517 00:43:02.295467 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.295924 kubelet[2220]: E0517 00:43:02.295901 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.296011 kubelet[2220]: W0517 00:43:02.295924 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.296011 kubelet[2220]: E0517 00:43:02.295944 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.380547 kubelet[2220]: E0517 00:43:02.380364 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64ww9" podUID="603f05d6-04eb-4ce3-baf0-5f232fe52221" May 17 00:43:02.384013 kubelet[2220]: E0517 00:43:02.383977 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.384218 kubelet[2220]: W0517 00:43:02.384192 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.384357 kubelet[2220]: E0517 00:43:02.384332 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.384931 kubelet[2220]: E0517 00:43:02.384905 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.385118 kubelet[2220]: W0517 00:43:02.385093 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.385256 kubelet[2220]: E0517 00:43:02.385233 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.385757 kubelet[2220]: E0517 00:43:02.385735 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.385899 kubelet[2220]: W0517 00:43:02.385876 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.386020 kubelet[2220]: E0517 00:43:02.385998 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.386507 kubelet[2220]: E0517 00:43:02.386475 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.386658 kubelet[2220]: W0517 00:43:02.386636 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.386777 kubelet[2220]: E0517 00:43:02.386756 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.387228 kubelet[2220]: E0517 00:43:02.387207 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.392001 kubelet[2220]: W0517 00:43:02.391964 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.392180 kubelet[2220]: E0517 00:43:02.392155 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.392711 kubelet[2220]: E0517 00:43:02.392682 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.392711 kubelet[2220]: W0517 00:43:02.392707 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.392896 kubelet[2220]: E0517 00:43:02.392732 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.395533 kubelet[2220]: E0517 00:43:02.393238 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.395533 kubelet[2220]: W0517 00:43:02.393259 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.395533 kubelet[2220]: E0517 00:43:02.393281 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.395533 kubelet[2220]: E0517 00:43:02.393651 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.395533 kubelet[2220]: W0517 00:43:02.393703 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.395533 kubelet[2220]: E0517 00:43:02.393726 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.395533 kubelet[2220]: E0517 00:43:02.394268 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.395533 kubelet[2220]: W0517 00:43:02.394283 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.395533 kubelet[2220]: E0517 00:43:02.394302 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.395533 kubelet[2220]: E0517 00:43:02.394627 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.396257 kubelet[2220]: W0517 00:43:02.394642 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.396257 kubelet[2220]: E0517 00:43:02.394660 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.396257 kubelet[2220]: E0517 00:43:02.395619 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.396257 kubelet[2220]: W0517 00:43:02.395637 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.396257 kubelet[2220]: E0517 00:43:02.395655 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.396257 kubelet[2220]: E0517 00:43:02.395962 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.396257 kubelet[2220]: W0517 00:43:02.395976 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.396257 kubelet[2220]: E0517 00:43:02.395992 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.396854 kubelet[2220]: E0517 00:43:02.396284 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.396854 kubelet[2220]: W0517 00:43:02.396297 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.396854 kubelet[2220]: E0517 00:43:02.396313 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.396854 kubelet[2220]: E0517 00:43:02.396614 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.396854 kubelet[2220]: W0517 00:43:02.396628 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.396854 kubelet[2220]: E0517 00:43:02.396643 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.397249 kubelet[2220]: E0517 00:43:02.396923 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.397249 kubelet[2220]: W0517 00:43:02.396938 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.397249 kubelet[2220]: E0517 00:43:02.396956 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.397249 kubelet[2220]: E0517 00:43:02.397220 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.397249 kubelet[2220]: W0517 00:43:02.397233 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.397249 kubelet[2220]: E0517 00:43:02.397246 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.397683 kubelet[2220]: E0517 00:43:02.397603 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.397683 kubelet[2220]: W0517 00:43:02.397617 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.397683 kubelet[2220]: E0517 00:43:02.397634 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.397949 kubelet[2220]: E0517 00:43:02.397925 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.398053 kubelet[2220]: W0517 00:43:02.397950 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.398053 kubelet[2220]: E0517 00:43:02.397970 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.398311 kubelet[2220]: E0517 00:43:02.398287 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.398401 kubelet[2220]: W0517 00:43:02.398313 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.398401 kubelet[2220]: E0517 00:43:02.398332 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.402491 kubelet[2220]: E0517 00:43:02.400708 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.402491 kubelet[2220]: W0517 00:43:02.400728 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.402491 kubelet[2220]: E0517 00:43:02.400746 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.402917 kubelet[2220]: E0517 00:43:02.402847 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.403017 kubelet[2220]: W0517 00:43:02.402917 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.403017 kubelet[2220]: E0517 00:43:02.402939 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.404462 kubelet[2220]: E0517 00:43:02.403282 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.404462 kubelet[2220]: W0517 00:43:02.403301 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.404462 kubelet[2220]: E0517 00:43:02.403319 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.404462 kubelet[2220]: E0517 00:43:02.403865 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.404462 kubelet[2220]: W0517 00:43:02.403882 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.404462 kubelet[2220]: E0517 00:43:02.403917 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.404462 kubelet[2220]: I0517 00:43:02.403961 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/603f05d6-04eb-4ce3-baf0-5f232fe52221-registration-dir\") pod \"csi-node-driver-64ww9\" (UID: \"603f05d6-04eb-4ce3-baf0-5f232fe52221\") " pod="calico-system/csi-node-driver-64ww9" May 17 00:43:02.404462 kubelet[2220]: E0517 00:43:02.404347 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.404462 kubelet[2220]: W0517 00:43:02.404364 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.405200 kubelet[2220]: E0517 00:43:02.404389 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.405200 kubelet[2220]: E0517 00:43:02.404789 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.405200 kubelet[2220]: W0517 00:43:02.404804 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.405200 kubelet[2220]: E0517 00:43:02.404829 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.405200 kubelet[2220]: E0517 00:43:02.405187 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.405200 kubelet[2220]: W0517 00:43:02.405201 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.405648 kubelet[2220]: E0517 00:43:02.405224 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.405648 kubelet[2220]: E0517 00:43:02.405608 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.405648 kubelet[2220]: W0517 00:43:02.405623 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.405856 kubelet[2220]: E0517 00:43:02.405746 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.405856 kubelet[2220]: I0517 00:43:02.405784 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/603f05d6-04eb-4ce3-baf0-5f232fe52221-kubelet-dir\") pod \"csi-node-driver-64ww9\" (UID: \"603f05d6-04eb-4ce3-baf0-5f232fe52221\") " pod="calico-system/csi-node-driver-64ww9" May 17 00:43:02.406038 kubelet[2220]: E0517 00:43:02.406013 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.406038 kubelet[2220]: W0517 00:43:02.406026 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.406172 kubelet[2220]: E0517 00:43:02.406048 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.410476 kubelet[2220]: E0517 00:43:02.406383 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.410476 kubelet[2220]: W0517 00:43:02.406400 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.410476 kubelet[2220]: E0517 00:43:02.406453 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.410476 kubelet[2220]: E0517 00:43:02.406775 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.410476 kubelet[2220]: W0517 00:43:02.406789 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.410476 kubelet[2220]: E0517 00:43:02.406811 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.410476 kubelet[2220]: I0517 00:43:02.406842 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/603f05d6-04eb-4ce3-baf0-5f232fe52221-varrun\") pod \"csi-node-driver-64ww9\" (UID: \"603f05d6-04eb-4ce3-baf0-5f232fe52221\") " pod="calico-system/csi-node-driver-64ww9" May 17 00:43:02.410476 kubelet[2220]: E0517 00:43:02.407176 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.410476 kubelet[2220]: W0517 00:43:02.407190 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.410988 kubelet[2220]: E0517 00:43:02.407300 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.410988 kubelet[2220]: I0517 00:43:02.407332 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnkj\" (UniqueName: \"kubernetes.io/projected/603f05d6-04eb-4ce3-baf0-5f232fe52221-kube-api-access-kfnkj\") pod \"csi-node-driver-64ww9\" (UID: \"603f05d6-04eb-4ce3-baf0-5f232fe52221\") " pod="calico-system/csi-node-driver-64ww9" May 17 00:43:02.410988 kubelet[2220]: E0517 00:43:02.407652 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.410988 kubelet[2220]: W0517 00:43:02.407665 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.410988 kubelet[2220]: E0517 00:43:02.407788 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.410988 kubelet[2220]: E0517 00:43:02.407986 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.410988 kubelet[2220]: W0517 00:43:02.407997 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.410988 kubelet[2220]: E0517 00:43:02.408016 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.410988 kubelet[2220]: E0517 00:43:02.408278 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.411312 kubelet[2220]: W0517 00:43:02.408289 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.411312 kubelet[2220]: E0517 00:43:02.408307 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.411312 kubelet[2220]: I0517 00:43:02.408335 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/603f05d6-04eb-4ce3-baf0-5f232fe52221-socket-dir\") pod \"csi-node-driver-64ww9\" (UID: \"603f05d6-04eb-4ce3-baf0-5f232fe52221\") " pod="calico-system/csi-node-driver-64ww9" May 17 00:43:02.411312 kubelet[2220]: E0517 00:43:02.408709 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.411312 kubelet[2220]: W0517 00:43:02.408725 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.411312 kubelet[2220]: E0517 00:43:02.408749 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.411312 kubelet[2220]: E0517 00:43:02.409086 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.411312 kubelet[2220]: W0517 00:43:02.409100 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.411312 kubelet[2220]: E0517 00:43:02.409118 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.412654 kubelet[2220]: E0517 00:43:02.409558 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.412654 kubelet[2220]: W0517 00:43:02.409576 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.412654 kubelet[2220]: E0517 00:43:02.409599 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.412654 kubelet[2220]: E0517 00:43:02.409922 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.412654 kubelet[2220]: W0517 00:43:02.409936 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.412654 kubelet[2220]: E0517 00:43:02.409954 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.412654 kubelet[2220]: E0517 00:43:02.410267 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.412654 kubelet[2220]: W0517 00:43:02.410282 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.412654 kubelet[2220]: E0517 00:43:02.410298 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.412654 kubelet[2220]: E0517 00:43:02.410660 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.413272 kubelet[2220]: W0517 00:43:02.410676 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.413272 kubelet[2220]: E0517 00:43:02.410694 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.510741 kubelet[2220]: E0517 00:43:02.510692 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.510741 kubelet[2220]: W0517 00:43:02.510727 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.511048 kubelet[2220]: E0517 00:43:02.510762 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.511367 kubelet[2220]: E0517 00:43:02.511339 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.511367 kubelet[2220]: W0517 00:43:02.511361 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.511642 kubelet[2220]: E0517 00:43:02.511409 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.511975 kubelet[2220]: E0517 00:43:02.511951 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.511975 kubelet[2220]: W0517 00:43:02.511974 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.512180 kubelet[2220]: E0517 00:43:02.512003 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.512507 kubelet[2220]: E0517 00:43:02.512483 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.512507 kubelet[2220]: W0517 00:43:02.512509 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.512736 kubelet[2220]: E0517 00:43:02.512537 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.513130 kubelet[2220]: E0517 00:43:02.513108 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.513312 kubelet[2220]: W0517 00:43:02.513280 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.513624 kubelet[2220]: E0517 00:43:02.513592 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.514153 kubelet[2220]: E0517 00:43:02.514132 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.514382 kubelet[2220]: W0517 00:43:02.514358 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.514537 kubelet[2220]: E0517 00:43:02.514516 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.515089 kubelet[2220]: E0517 00:43:02.515068 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.515239 kubelet[2220]: W0517 00:43:02.515207 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.515391 kubelet[2220]: E0517 00:43:02.515366 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.515941 kubelet[2220]: E0517 00:43:02.515919 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.516112 kubelet[2220]: W0517 00:43:02.516088 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.516240 kubelet[2220]: E0517 00:43:02.516211 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.516711 kubelet[2220]: E0517 00:43:02.516686 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.516711 kubelet[2220]: W0517 00:43:02.516710 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.516899 kubelet[2220]: E0517 00:43:02.516735 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.517214 kubelet[2220]: E0517 00:43:02.517190 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.517340 kubelet[2220]: W0517 00:43:02.517215 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.517340 kubelet[2220]: E0517 00:43:02.517247 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.517736 kubelet[2220]: E0517 00:43:02.517711 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.517848 kubelet[2220]: W0517 00:43:02.517736 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.517958 kubelet[2220]: E0517 00:43:02.517879 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:02.518284 kubelet[2220]: E0517 00:43:02.518248 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.518284 kubelet[2220]: W0517 00:43:02.518271 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.518510 kubelet[2220]: E0517 00:43:02.518301 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:02.518746 kubelet[2220]: E0517 00:43:02.518722 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:02.518868 kubelet[2220]: W0517 00:43:02.518754 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:02.518868 kubelet[2220]: E0517 00:43:02.518782 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:43:02.519164 kubelet[2220]: E0517 00:43:02.519141 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:43:02.519288 kubelet[2220]: W0517 00:43:02.519165 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:43:02.519365 kubelet[2220]: E0517 00:43:02.519296 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:43:02.934000 audit[2753]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:43:02.945259 kernel: kauditd_printk_skb: 19 callbacks suppressed
May 17 00:43:02.945368 kernel: audit: type=1325 audit(1747442582.934:282): table=filter:97 family=2 entries=21 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:43:02.934000 audit[2753]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe4d3a99d0 a2=0 a3=7ffe4d3a99bc items=0 ppid=2367 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:02.991785 kubelet[2220]: E0517 00:43:02.991634 2220 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
May 17 00:43:02.992164 kubelet[2220]: E0517 00:43:02.992138 2220 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b5e8e88-776a-4063-b404-8e336eea4cd1-tigera-ca-bundle podName:3b5e8e88-776a-4063-b404-8e336eea4cd1 nodeName:}" failed. No retries permitted until 2025-05-17 00:43:03.492098067 +0000 UTC m=+22.776427567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/3b5e8e88-776a-4063-b404-8e336eea4cd1-tigera-ca-bundle") pod "calico-typha-b5d8b7ddd-7fjrt" (UID: "3b5e8e88-776a-4063-b404-8e336eea4cd1") : failed to sync configmap cache: timed out waiting for the condition
May 17 00:43:02.996480 kernel: audit: type=1300 audit(1747442582.934:282): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe4d3a99d0 a2=0 a3=7ffe4d3a99bc items=0 ppid=2367 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:02.934000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:43:03.014507 kernel: audit: type=1327 audit(1747442582.934:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:43:03.014000 audit[2753]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:43:03.014000 audit[2753]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4d3a99d0 a2=0 a3=0 items=0 ppid=2367 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:03.064183 kernel: audit: type=1325 audit(1747442583.014:283): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:43:03.064346 kernel: audit: type=1300 audit(1747442583.014:283): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4d3a99d0 a2=0 a3=0 items=0 ppid=2367 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:43:03.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:43:03.081472 kernel: audit: type=1327 audit(1747442583.014:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:43:03.167688 kubelet[2220]: E0517 00:43:03.167515 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.240235 kubelet[2220]: E0517 00:43:03.240178 2220 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition May 17 00:43:03.240466 kubelet[2220]: E0517 00:43:03.240301 2220 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a136356-4794-404d-b254-47d4350a628a-tigera-ca-bundle podName:9a136356-4794-404d-b254-47d4350a628a nodeName:}" failed. No retries permitted until 2025-05-17 00:43:03.740274303 +0000 UTC m=+23.024603807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/9a136356-4794-404d-b254-47d4350a628a-tigera-ca-bundle") pod "calico-node-n2l6j" (UID: "9a136356-4794-404d-b254-47d4350a628a") : failed to sync configmap cache: timed out waiting for the condition May 17 00:43:03.269233 kubelet[2220]: E0517 00:43:03.269096 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.269233 kubelet[2220]: W0517 00:43:03.269157 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.269233 kubelet[2220]: E0517 00:43:03.269191 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.270954 kubelet[2220]: E0517 00:43:03.269645 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.270954 kubelet[2220]: W0517 00:43:03.269669 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.270954 kubelet[2220]: E0517 00:43:03.269687 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.371512 kubelet[2220]: E0517 00:43:03.371460 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.371512 kubelet[2220]: W0517 00:43:03.371506 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.371836 kubelet[2220]: E0517 00:43:03.371543 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.371950 kubelet[2220]: E0517 00:43:03.371929 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.371950 kubelet[2220]: W0517 00:43:03.371945 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.372075 kubelet[2220]: E0517 00:43:03.371967 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.473479 kubelet[2220]: E0517 00:43:03.473414 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.473479 kubelet[2220]: W0517 00:43:03.473475 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.473792 kubelet[2220]: E0517 00:43:03.473511 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.473972 kubelet[2220]: E0517 00:43:03.473944 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.474077 kubelet[2220]: W0517 00:43:03.473972 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.474077 kubelet[2220]: E0517 00:43:03.474006 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.574838 kubelet[2220]: E0517 00:43:03.574786 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.574838 kubelet[2220]: W0517 00:43:03.574821 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.575142 kubelet[2220]: E0517 00:43:03.574857 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.575283 kubelet[2220]: E0517 00:43:03.575255 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.575283 kubelet[2220]: W0517 00:43:03.575278 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.575406 kubelet[2220]: E0517 00:43:03.575334 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.575782 kubelet[2220]: E0517 00:43:03.575757 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.575891 kubelet[2220]: W0517 00:43:03.575794 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.575891 kubelet[2220]: E0517 00:43:03.575818 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.576195 kubelet[2220]: E0517 00:43:03.576170 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.576195 kubelet[2220]: W0517 00:43:03.576192 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.576408 kubelet[2220]: E0517 00:43:03.576212 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.576600 kubelet[2220]: E0517 00:43:03.576559 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.576600 kubelet[2220]: W0517 00:43:03.576580 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.576746 kubelet[2220]: E0517 00:43:03.576602 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.577005 kubelet[2220]: E0517 00:43:03.576982 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.577089 kubelet[2220]: W0517 00:43:03.577004 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.577089 kubelet[2220]: E0517 00:43:03.577024 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.578631 kubelet[2220]: E0517 00:43:03.578605 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.578631 kubelet[2220]: W0517 00:43:03.578629 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.578776 kubelet[2220]: E0517 00:43:03.578651 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.634892 env[1335]: time="2025-05-17T00:43:03.634804800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b5d8b7ddd-7fjrt,Uid:3b5e8e88-776a-4063-b404-8e336eea4cd1,Namespace:calico-system,Attempt:0,}" May 17 00:43:03.667263 env[1335]: time="2025-05-17T00:43:03.667163157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:03.667595 env[1335]: time="2025-05-17T00:43:03.667542950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:03.667795 env[1335]: time="2025-05-17T00:43:03.667726737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:03.668118 env[1335]: time="2025-05-17T00:43:03.668047444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7cea47d5621a316b8069c7a87d2438dd421686e7661e33114e12f3babc9e777e pid=2790 runtime=io.containerd.runc.v2 May 17 00:43:03.678861 kubelet[2220]: E0517 00:43:03.678822 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.678861 kubelet[2220]: W0517 00:43:03.678859 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.679111 kubelet[2220]: E0517 00:43:03.678894 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.782128 env[1335]: time="2025-05-17T00:43:03.781346478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b5d8b7ddd-7fjrt,Uid:3b5e8e88-776a-4063-b404-8e336eea4cd1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7cea47d5621a316b8069c7a87d2438dd421686e7661e33114e12f3babc9e777e\"" May 17 00:43:03.783017 kubelet[2220]: E0517 00:43:03.782974 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.783017 kubelet[2220]: W0517 00:43:03.783001 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.783248 kubelet[2220]: E0517 00:43:03.783035 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.783563 kubelet[2220]: E0517 00:43:03.783521 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.783563 kubelet[2220]: W0517 00:43:03.783542 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.783772 kubelet[2220]: E0517 00:43:03.783569 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.783901 kubelet[2220]: E0517 00:43:03.783877 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.784018 kubelet[2220]: W0517 00:43:03.783902 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.784018 kubelet[2220]: E0517 00:43:03.783922 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.784237 kubelet[2220]: E0517 00:43:03.784211 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.784340 kubelet[2220]: W0517 00:43:03.784238 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.784340 kubelet[2220]: E0517 00:43:03.784257 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:03.784672 kubelet[2220]: E0517 00:43:03.784651 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.784672 kubelet[2220]: W0517 00:43:03.784672 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.784857 kubelet[2220]: E0517 00:43:03.784694 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.786463 kubelet[2220]: E0517 00:43:03.786418 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:03.786587 kubelet[2220]: W0517 00:43:03.786463 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:03.786587 kubelet[2220]: E0517 00:43:03.786485 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:03.788497 env[1335]: time="2025-05-17T00:43:03.787523260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:43:03.869459 env[1335]: time="2025-05-17T00:43:03.869279330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n2l6j,Uid:9a136356-4794-404d-b254-47d4350a628a,Namespace:calico-system,Attempt:0,}" May 17 00:43:03.904402 env[1335]: time="2025-05-17T00:43:03.904301100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:03.904707 env[1335]: time="2025-05-17T00:43:03.904362388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:03.904707 env[1335]: time="2025-05-17T00:43:03.904403451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:03.904955 env[1335]: time="2025-05-17T00:43:03.904879991Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a025588b099cbe70c9351ed317b64e7f1040d55f803749286b09435021f5b8 pid=2836 runtime=io.containerd.runc.v2 May 17 00:43:03.931845 kubelet[2220]: E0517 00:43:03.931763 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64ww9" podUID="603f05d6-04eb-4ce3-baf0-5f232fe52221" May 17 00:43:03.984538 env[1335]: time="2025-05-17T00:43:03.984371605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n2l6j,Uid:9a136356-4794-404d-b254-47d4350a628a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4a025588b099cbe70c9351ed317b64e7f1040d55f803749286b09435021f5b8\"" May 17 00:43:04.878581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898065075.mount: Deactivated successfully. 
May 17 00:43:05.932378 kubelet[2220]: E0517 00:43:05.932329 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64ww9" podUID="603f05d6-04eb-4ce3-baf0-5f232fe52221" May 17 00:43:06.167169 env[1335]: time="2025-05-17T00:43:06.167086174Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:06.170232 env[1335]: time="2025-05-17T00:43:06.170161516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:06.172908 env[1335]: time="2025-05-17T00:43:06.172844994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:06.176290 env[1335]: time="2025-05-17T00:43:06.176225128Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:06.177507 env[1335]: time="2025-05-17T00:43:06.177458865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:43:06.179898 env[1335]: time="2025-05-17T00:43:06.179212086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:43:06.207874 env[1335]: time="2025-05-17T00:43:06.207723121Z" level=info msg="CreateContainer within sandbox 
\"7cea47d5621a316b8069c7a87d2438dd421686e7661e33114e12f3babc9e777e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:43:06.231741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416654673.mount: Deactivated successfully. May 17 00:43:06.237201 env[1335]: time="2025-05-17T00:43:06.237140240Z" level=info msg="CreateContainer within sandbox \"7cea47d5621a316b8069c7a87d2438dd421686e7661e33114e12f3babc9e777e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c0e7fa5cd1b5c7724214c7a75853b1e1fba88a70ae37efb9a9b68861099f79de\"" May 17 00:43:06.238787 env[1335]: time="2025-05-17T00:43:06.238742151Z" level=info msg="StartContainer for \"c0e7fa5cd1b5c7724214c7a75853b1e1fba88a70ae37efb9a9b68861099f79de\"" May 17 00:43:06.361305 env[1335]: time="2025-05-17T00:43:06.358708922Z" level=info msg="StartContainer for \"c0e7fa5cd1b5c7724214c7a75853b1e1fba88a70ae37efb9a9b68861099f79de\" returns successfully" May 17 00:43:07.133400 env[1335]: time="2025-05-17T00:43:07.133331092Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.136511 env[1335]: time="2025-05-17T00:43:07.136461204Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.140737 env[1335]: time="2025-05-17T00:43:07.140684527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.147386 env[1335]: time="2025-05-17T00:43:07.147310386Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:07.151379 env[1335]: time="2025-05-17T00:43:07.149776711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:43:07.154101 kubelet[2220]: E0517 00:43:07.153664 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.154101 kubelet[2220]: W0517 00:43:07.153727 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.154101 kubelet[2220]: E0517 00:43:07.153758 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.155731 kubelet[2220]: E0517 00:43:07.155576 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.155731 kubelet[2220]: W0517 00:43:07.155596 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.155731 kubelet[2220]: E0517 00:43:07.155617 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.157502 kubelet[2220]: E0517 00:43:07.156843 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.157502 kubelet[2220]: W0517 00:43:07.156863 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.157502 kubelet[2220]: E0517 00:43:07.156884 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.157502 kubelet[2220]: E0517 00:43:07.157306 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.157502 kubelet[2220]: W0517 00:43:07.157322 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.157502 kubelet[2220]: E0517 00:43:07.157340 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.161347 kubelet[2220]: E0517 00:43:07.161321 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.161553 kubelet[2220]: W0517 00:43:07.161516 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.161709 kubelet[2220]: E0517 00:43:07.161688 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.162751 kubelet[2220]: E0517 00:43:07.162191 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.162751 kubelet[2220]: W0517 00:43:07.162215 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.162751 kubelet[2220]: E0517 00:43:07.162238 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.162751 kubelet[2220]: E0517 00:43:07.162591 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.162751 kubelet[2220]: W0517 00:43:07.162609 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.162751 kubelet[2220]: E0517 00:43:07.162626 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.163107 env[1335]: time="2025-05-17T00:43:07.162989388Z" level=info msg="CreateContainer within sandbox \"f4a025588b099cbe70c9351ed317b64e7f1040d55f803749286b09435021f5b8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:43:07.163485 kubelet[2220]: E0517 00:43:07.163460 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.163692 kubelet[2220]: W0517 00:43:07.163485 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.163692 kubelet[2220]: E0517 00:43:07.163505 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.165897 kubelet[2220]: E0517 00:43:07.165875 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.166088 kubelet[2220]: W0517 00:43:07.166032 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.166088 kubelet[2220]: E0517 00:43:07.166066 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.166636 kubelet[2220]: E0517 00:43:07.166454 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.166636 kubelet[2220]: W0517 00:43:07.166475 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.166636 kubelet[2220]: E0517 00:43:07.166494 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.167199 kubelet[2220]: E0517 00:43:07.167018 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.167199 kubelet[2220]: W0517 00:43:07.167037 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.167199 kubelet[2220]: E0517 00:43:07.167057 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.167488 kubelet[2220]: E0517 00:43:07.167407 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.167488 kubelet[2220]: W0517 00:43:07.167439 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.167488 kubelet[2220]: E0517 00:43:07.167468 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.167817 kubelet[2220]: E0517 00:43:07.167794 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.167947 kubelet[2220]: W0517 00:43:07.167819 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.167947 kubelet[2220]: E0517 00:43:07.167838 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.168160 kubelet[2220]: E0517 00:43:07.168137 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.168254 kubelet[2220]: W0517 00:43:07.168160 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.168254 kubelet[2220]: E0517 00:43:07.168179 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.168519 kubelet[2220]: E0517 00:43:07.168500 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.168610 kubelet[2220]: W0517 00:43:07.168522 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.168610 kubelet[2220]: E0517 00:43:07.168541 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.191832 env[1335]: time="2025-05-17T00:43:07.191711067Z" level=info msg="CreateContainer within sandbox \"f4a025588b099cbe70c9351ed317b64e7f1040d55f803749286b09435021f5b8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b2291f72ce427246694188e32b90ae1683c19ab1e97fa4f26dcdb9de2297310\"" May 17 00:43:07.194928 env[1335]: time="2025-05-17T00:43:07.193983919Z" level=info msg="StartContainer for \"6b2291f72ce427246694188e32b90ae1683c19ab1e97fa4f26dcdb9de2297310\"" May 17 00:43:07.214164 kubelet[2220]: E0517 00:43:07.213467 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.214164 kubelet[2220]: W0517 00:43:07.213491 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.214164 kubelet[2220]: E0517 00:43:07.213519 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.214164 kubelet[2220]: E0517 00:43:07.213960 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.214164 kubelet[2220]: W0517 00:43:07.213975 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.214164 kubelet[2220]: E0517 00:43:07.214001 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.215999 kubelet[2220]: E0517 00:43:07.215587 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.215999 kubelet[2220]: W0517 00:43:07.215670 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.215999 kubelet[2220]: E0517 00:43:07.215733 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.216498 kubelet[2220]: E0517 00:43:07.216456 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.216498 kubelet[2220]: W0517 00:43:07.216483 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.216757 kubelet[2220]: E0517 00:43:07.216706 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.217108 kubelet[2220]: E0517 00:43:07.217071 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.217108 kubelet[2220]: W0517 00:43:07.217095 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.217282 kubelet[2220]: E0517 00:43:07.217242 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.217652 kubelet[2220]: E0517 00:43:07.217498 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.217652 kubelet[2220]: W0517 00:43:07.217518 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.218253 kubelet[2220]: E0517 00:43:07.218226 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.218766 kubelet[2220]: E0517 00:43:07.218554 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.218766 kubelet[2220]: W0517 00:43:07.218572 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.218766 kubelet[2220]: E0517 00:43:07.218597 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.219451 kubelet[2220]: E0517 00:43:07.219396 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.219747 kubelet[2220]: W0517 00:43:07.219416 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.219889 kubelet[2220]: E0517 00:43:07.219747 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.220021 kubelet[2220]: E0517 00:43:07.219995 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.220126 kubelet[2220]: W0517 00:43:07.220022 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.220208 kubelet[2220]: E0517 00:43:07.220157 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.220627 kubelet[2220]: E0517 00:43:07.220395 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.220627 kubelet[2220]: W0517 00:43:07.220414 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.220889 kubelet[2220]: E0517 00:43:07.220625 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.220974 kubelet[2220]: E0517 00:43:07.220898 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.220974 kubelet[2220]: W0517 00:43:07.220913 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.220974 kubelet[2220]: E0517 00:43:07.220939 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.223662 kubelet[2220]: E0517 00:43:07.221881 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.223662 kubelet[2220]: W0517 00:43:07.221901 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.223662 kubelet[2220]: E0517 00:43:07.221982 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.223662 kubelet[2220]: E0517 00:43:07.222298 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.223662 kubelet[2220]: W0517 00:43:07.222313 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.223662 kubelet[2220]: E0517 00:43:07.222477 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.223662 kubelet[2220]: E0517 00:43:07.222724 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.223662 kubelet[2220]: W0517 00:43:07.222738 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.223662 kubelet[2220]: E0517 00:43:07.222759 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.223662 kubelet[2220]: E0517 00:43:07.223140 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.225960 kubelet[2220]: W0517 00:43:07.223154 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.225960 kubelet[2220]: E0517 00:43:07.223177 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.225960 kubelet[2220]: E0517 00:43:07.223784 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.225960 kubelet[2220]: W0517 00:43:07.223800 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.225960 kubelet[2220]: E0517 00:43:07.223959 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.225960 kubelet[2220]: E0517 00:43:07.224210 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.225960 kubelet[2220]: W0517 00:43:07.224225 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.225960 kubelet[2220]: E0517 00:43:07.224246 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:43:07.225960 kubelet[2220]: E0517 00:43:07.225539 2220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:43:07.225960 kubelet[2220]: W0517 00:43:07.225562 2220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:43:07.226634 kubelet[2220]: E0517 00:43:07.225583 2220 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:43:07.257911 systemd[1]: run-containerd-runc-k8s.io-6b2291f72ce427246694188e32b90ae1683c19ab1e97fa4f26dcdb9de2297310-runc.ZdFzZN.mount: Deactivated successfully. May 17 00:43:07.321524 env[1335]: time="2025-05-17T00:43:07.318586520Z" level=info msg="StartContainer for \"6b2291f72ce427246694188e32b90ae1683c19ab1e97fa4f26dcdb9de2297310\" returns successfully" May 17 00:43:07.374583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b2291f72ce427246694188e32b90ae1683c19ab1e97fa4f26dcdb9de2297310-rootfs.mount: Deactivated successfully. 
May 17 00:43:07.932244 kubelet[2220]: E0517 00:43:07.932148 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64ww9" podUID="603f05d6-04eb-4ce3-baf0-5f232fe52221" May 17 00:43:08.093995 env[1335]: time="2025-05-17T00:43:08.093922068Z" level=info msg="shim disconnected" id=6b2291f72ce427246694188e32b90ae1683c19ab1e97fa4f26dcdb9de2297310 May 17 00:43:08.093995 env[1335]: time="2025-05-17T00:43:08.093997373Z" level=warning msg="cleaning up after shim disconnected" id=6b2291f72ce427246694188e32b90ae1683c19ab1e97fa4f26dcdb9de2297310 namespace=k8s.io May 17 00:43:08.094480 env[1335]: time="2025-05-17T00:43:08.094015903Z" level=info msg="cleaning up dead shim" May 17 00:43:08.107493 env[1335]: time="2025-05-17T00:43:08.107414452Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2998 runtime=io.containerd.runc.v2\n" May 17 00:43:08.145467 kubelet[2220]: I0517 00:43:08.145285 2220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:43:08.149022 env[1335]: time="2025-05-17T00:43:08.148935755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:43:08.170389 kubelet[2220]: I0517 00:43:08.170314 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b5d8b7ddd-7fjrt" podStartSLOduration=4.778177874 podStartE2EDuration="7.170290315s" podCreationTimestamp="2025-05-17 00:43:01 +0000 UTC" firstStartedPulling="2025-05-17 00:43:03.786925278 +0000 UTC m=+23.071254785" lastFinishedPulling="2025-05-17 00:43:06.179037729 +0000 UTC m=+25.463367226" observedRunningTime="2025-05-17 00:43:07.16082698 +0000 UTC m=+26.445156500" watchObservedRunningTime="2025-05-17 00:43:08.170290315 +0000 UTC 
m=+27.454619838" May 17 00:43:09.932379 kubelet[2220]: E0517 00:43:09.932304 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64ww9" podUID="603f05d6-04eb-4ce3-baf0-5f232fe52221" May 17 00:43:11.660354 env[1335]: time="2025-05-17T00:43:11.660254696Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:11.663964 env[1335]: time="2025-05-17T00:43:11.663897944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:11.667282 env[1335]: time="2025-05-17T00:43:11.667222713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:11.669208 env[1335]: time="2025-05-17T00:43:11.669143408Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:11.670293 env[1335]: time="2025-05-17T00:43:11.670240768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:43:11.675292 env[1335]: time="2025-05-17T00:43:11.675239753Z" level=info msg="CreateContainer within sandbox \"f4a025588b099cbe70c9351ed317b64e7f1040d55f803749286b09435021f5b8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:43:11.706221 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3212011824.mount: Deactivated successfully. May 17 00:43:11.714355 env[1335]: time="2025-05-17T00:43:11.714306400Z" level=info msg="CreateContainer within sandbox \"f4a025588b099cbe70c9351ed317b64e7f1040d55f803749286b09435021f5b8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e829b23cd04196821e4de7a289f8be0966826536174057ece5dadc10453a755\"" May 17 00:43:11.715535 env[1335]: time="2025-05-17T00:43:11.715346680Z" level=info msg="StartContainer for \"8e829b23cd04196821e4de7a289f8be0966826536174057ece5dadc10453a755\"" May 17 00:43:11.779059 systemd[1]: run-containerd-runc-k8s.io-8e829b23cd04196821e4de7a289f8be0966826536174057ece5dadc10453a755-runc.tvhBg1.mount: Deactivated successfully. May 17 00:43:11.846481 env[1335]: time="2025-05-17T00:43:11.846366315Z" level=info msg="StartContainer for \"8e829b23cd04196821e4de7a289f8be0966826536174057ece5dadc10453a755\" returns successfully" May 17 00:43:11.933171 kubelet[2220]: E0517 00:43:11.931943 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64ww9" podUID="603f05d6-04eb-4ce3-baf0-5f232fe52221" May 17 00:43:12.765097 kubelet[2220]: I0517 00:43:12.765040 2220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:43:12.826000 audit[3055]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:12.844451 kernel: audit: type=1325 audit(1747442592.826:284): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:12.826000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4b0294b0 a2=0 a3=7ffc4b02949c 
items=0 ppid=2367 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:12.878461 kernel: audit: type=1300 audit(1747442592.826:284): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4b0294b0 a2=0 a3=7ffc4b02949c items=0 ppid=2367 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:12.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:12.898455 kernel: audit: type=1327 audit(1747442592.826:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:12.880000 audit[3055]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:12.916972 kernel: audit: type=1325 audit(1747442592.880:285): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:12.917182 kernel: audit: type=1300 audit(1747442592.880:285): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc4b0294b0 a2=0 a3=7ffc4b02949c items=0 ppid=2367 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:12.880000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc4b0294b0 a2=0 a3=7ffc4b02949c items=0 ppid=2367 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:12.880000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:12.968045 kernel: audit: type=1327 audit(1747442592.880:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:13.121747 env[1335]: time="2025-05-17T00:43:13.121590211Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:43:13.149001 kubelet[2220]: I0517 00:43:13.147575 2220 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:43:13.178444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e829b23cd04196821e4de7a289f8be0966826536174057ece5dadc10453a755-rootfs.mount: Deactivated successfully. 
May 17 00:43:13.293990 kubelet[2220]: I0517 00:43:13.293941 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13d7a51f-4298-4a96-98ab-641457e5522e-config-volume\") pod \"coredns-7c65d6cfc9-szlbf\" (UID: \"13d7a51f-4298-4a96-98ab-641457e5522e\") " pod="kube-system/coredns-7c65d6cfc9-szlbf" May 17 00:43:13.295332 kubelet[2220]: I0517 00:43:13.295297 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67d969c1-4d93-44a9-a00c-87eca6fdadfb-config\") pod \"goldmane-8f77d7b6c-4qrzt\" (UID: \"67d969c1-4d93-44a9-a00c-87eca6fdadfb\") " pod="calico-system/goldmane-8f77d7b6c-4qrzt" May 17 00:43:13.295598 kubelet[2220]: I0517 00:43:13.295572 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fgv\" (UniqueName: \"kubernetes.io/projected/877258a8-70e4-4a88-a629-d7c04d184c1d-kube-api-access-w5fgv\") pod \"calico-kube-controllers-7f7cfb968-xnd89\" (UID: \"877258a8-70e4-4a88-a629-d7c04d184c1d\") " pod="calico-system/calico-kube-controllers-7f7cfb968-xnd89" May 17 00:43:13.295746 kubelet[2220]: I0517 00:43:13.295722 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8crn8\" (UniqueName: \"kubernetes.io/projected/9ab41193-998e-4949-bcb0-dcdfdf7aa08f-kube-api-access-8crn8\") pod \"calico-apiserver-bf9b9cc9-xp6vq\" (UID: \"9ab41193-998e-4949-bcb0-dcdfdf7aa08f\") " pod="calico-apiserver/calico-apiserver-bf9b9cc9-xp6vq" May 17 00:43:13.295942 kubelet[2220]: I0517 00:43:13.295920 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwtt\" (UniqueName: \"kubernetes.io/projected/13d7a51f-4298-4a96-98ab-641457e5522e-kube-api-access-jjwtt\") pod \"coredns-7c65d6cfc9-szlbf\" (UID: 
\"13d7a51f-4298-4a96-98ab-641457e5522e\") " pod="kube-system/coredns-7c65d6cfc9-szlbf" May 17 00:43:13.296088 kubelet[2220]: I0517 00:43:13.296063 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx976\" (UniqueName: \"kubernetes.io/projected/67d969c1-4d93-44a9-a00c-87eca6fdadfb-kube-api-access-zx976\") pod \"goldmane-8f77d7b6c-4qrzt\" (UID: \"67d969c1-4d93-44a9-a00c-87eca6fdadfb\") " pod="calico-system/goldmane-8f77d7b6c-4qrzt" May 17 00:43:13.296219 kubelet[2220]: I0517 00:43:13.296197 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b0f4094-a401-446d-a3d3-fefd3d968c34-config-volume\") pod \"coredns-7c65d6cfc9-wllmd\" (UID: \"2b0f4094-a401-446d-a3d3-fefd3d968c34\") " pod="kube-system/coredns-7c65d6cfc9-wllmd" May 17 00:43:13.296371 kubelet[2220]: I0517 00:43:13.296339 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw94d\" (UniqueName: \"kubernetes.io/projected/2b0f4094-a401-446d-a3d3-fefd3d968c34-kube-api-access-vw94d\") pod \"coredns-7c65d6cfc9-wllmd\" (UID: \"2b0f4094-a401-446d-a3d3-fefd3d968c34\") " pod="kube-system/coredns-7c65d6cfc9-wllmd" May 17 00:43:13.296553 kubelet[2220]: I0517 00:43:13.296527 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/877258a8-70e4-4a88-a629-d7c04d184c1d-tigera-ca-bundle\") pod \"calico-kube-controllers-7f7cfb968-xnd89\" (UID: \"877258a8-70e4-4a88-a629-d7c04d184c1d\") " pod="calico-system/calico-kube-controllers-7f7cfb968-xnd89" May 17 00:43:13.296718 kubelet[2220]: I0517 00:43:13.296692 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/9ab41193-998e-4949-bcb0-dcdfdf7aa08f-calico-apiserver-certs\") pod \"calico-apiserver-bf9b9cc9-xp6vq\" (UID: \"9ab41193-998e-4949-bcb0-dcdfdf7aa08f\") " pod="calico-apiserver/calico-apiserver-bf9b9cc9-xp6vq" May 17 00:43:13.299464 kubelet[2220]: I0517 00:43:13.299017 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/67d969c1-4d93-44a9-a00c-87eca6fdadfb-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-4qrzt\" (UID: \"67d969c1-4d93-44a9-a00c-87eca6fdadfb\") " pod="calico-system/goldmane-8f77d7b6c-4qrzt" May 17 00:43:13.299464 kubelet[2220]: I0517 00:43:13.299068 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67d969c1-4d93-44a9-a00c-87eca6fdadfb-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-4qrzt\" (UID: \"67d969c1-4d93-44a9-a00c-87eca6fdadfb\") " pod="calico-system/goldmane-8f77d7b6c-4qrzt" May 17 00:43:13.400346 kubelet[2220]: I0517 00:43:13.400198 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-whisker-backend-key-pair\") pod \"whisker-7bcc67d899-smqwx\" (UID: \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\") " pod="calico-system/whisker-7bcc67d899-smqwx" May 17 00:43:13.400674 kubelet[2220]: I0517 00:43:13.400645 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-whisker-ca-bundle\") pod \"whisker-7bcc67d899-smqwx\" (UID: \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\") " pod="calico-system/whisker-7bcc67d899-smqwx" May 17 00:43:13.400821 kubelet[2220]: I0517 00:43:13.400798 2220 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77b2b\" (UniqueName: \"kubernetes.io/projected/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-kube-api-access-77b2b\") pod \"whisker-7bcc67d899-smqwx\" (UID: \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\") " pod="calico-system/whisker-7bcc67d899-smqwx" May 17 00:43:13.401270 kubelet[2220]: I0517 00:43:13.401240 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/05a2ff53-9d59-4c60-9b54-47fa2348be26-calico-apiserver-certs\") pod \"calico-apiserver-bf9b9cc9-h6zp7\" (UID: \"05a2ff53-9d59-4c60-9b54-47fa2348be26\") " pod="calico-apiserver/calico-apiserver-bf9b9cc9-h6zp7" May 17 00:43:13.402217 kubelet[2220]: I0517 00:43:13.401480 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65fnp\" (UniqueName: \"kubernetes.io/projected/05a2ff53-9d59-4c60-9b54-47fa2348be26-kube-api-access-65fnp\") pod \"calico-apiserver-bf9b9cc9-h6zp7\" (UID: \"05a2ff53-9d59-4c60-9b54-47fa2348be26\") " pod="calico-apiserver/calico-apiserver-bf9b9cc9-h6zp7" May 17 00:43:13.561954 env[1335]: time="2025-05-17T00:43:13.561338802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-4qrzt,Uid:67d969c1-4d93-44a9-a00c-87eca6fdadfb,Namespace:calico-system,Attempt:0,}" May 17 00:43:13.580585 env[1335]: time="2025-05-17T00:43:13.580383847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wllmd,Uid:2b0f4094-a401-446d-a3d3-fefd3d968c34,Namespace:kube-system,Attempt:0,}" May 17 00:43:13.608384 env[1335]: time="2025-05-17T00:43:13.608314880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf9b9cc9-xp6vq,Uid:9ab41193-998e-4949-bcb0-dcdfdf7aa08f,Namespace:calico-apiserver,Attempt:0,}" May 17 00:43:13.629065 env[1335]: time="2025-05-17T00:43:13.629003943Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:whisker-7bcc67d899-smqwx,Uid:3e13e7b4-97cd-418e-853c-b1f9053a0bd8,Namespace:calico-system,Attempt:0,}" May 17 00:43:13.629985 env[1335]: time="2025-05-17T00:43:13.629941617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf9b9cc9-h6zp7,Uid:05a2ff53-9d59-4c60-9b54-47fa2348be26,Namespace:calico-apiserver,Attempt:0,}" May 17 00:43:13.663914 env[1335]: time="2025-05-17T00:43:13.663546599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7cfb968-xnd89,Uid:877258a8-70e4-4a88-a629-d7c04d184c1d,Namespace:calico-system,Attempt:0,}" May 17 00:43:13.741141 env[1335]: time="2025-05-17T00:43:13.741074765Z" level=info msg="shim disconnected" id=8e829b23cd04196821e4de7a289f8be0966826536174057ece5dadc10453a755 May 17 00:43:13.741555 env[1335]: time="2025-05-17T00:43:13.741520064Z" level=warning msg="cleaning up after shim disconnected" id=8e829b23cd04196821e4de7a289f8be0966826536174057ece5dadc10453a755 namespace=k8s.io May 17 00:43:13.741696 env[1335]: time="2025-05-17T00:43:13.741671233Z" level=info msg="cleaning up dead shim" May 17 00:43:13.756389 env[1335]: time="2025-05-17T00:43:13.756335255Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3082 runtime=io.containerd.runc.v2\n" May 17 00:43:13.830311 env[1335]: time="2025-05-17T00:43:13.830249711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-szlbf,Uid:13d7a51f-4298-4a96-98ab-641457e5522e,Namespace:kube-system,Attempt:0,}" May 17 00:43:13.972932 env[1335]: time="2025-05-17T00:43:13.972755164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-64ww9,Uid:603f05d6-04eb-4ce3-baf0-5f232fe52221,Namespace:calico-system,Attempt:0,}" May 17 00:43:14.156035 env[1335]: time="2025-05-17T00:43:14.155910057Z" level=error msg="Failed to destroy network for sandbox 
\"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.157352 env[1335]: time="2025-05-17T00:43:14.157292207Z" level=error msg="encountered an error cleaning up failed sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.157616 env[1335]: time="2025-05-17T00:43:14.157565764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf9b9cc9-h6zp7,Uid:05a2ff53-9d59-4c60-9b54-47fa2348be26,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.161555 kubelet[2220]: E0517 00:43:14.158181 2220 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.161555 kubelet[2220]: E0517 00:43:14.158304 2220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bf9b9cc9-h6zp7" May 17 00:43:14.161555 kubelet[2220]: E0517 00:43:14.158357 2220 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bf9b9cc9-h6zp7" May 17 00:43:14.163333 kubelet[2220]: E0517 00:43:14.158558 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bf9b9cc9-h6zp7_calico-apiserver(05a2ff53-9d59-4c60-9b54-47fa2348be26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bf9b9cc9-h6zp7_calico-apiserver(05a2ff53-9d59-4c60-9b54-47fa2348be26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bf9b9cc9-h6zp7" podUID="05a2ff53-9d59-4c60-9b54-47fa2348be26" May 17 00:43:14.190037 env[1335]: time="2025-05-17T00:43:14.186624708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:43:14.217192 kubelet[2220]: I0517 00:43:14.215300 2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:14.224548 env[1335]: time="2025-05-17T00:43:14.223659523Z" level=info msg="StopPodSandbox for 
\"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\"" May 17 00:43:14.312875 env[1335]: time="2025-05-17T00:43:14.312791586Z" level=error msg="Failed to destroy network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.318210 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b-shm.mount: Deactivated successfully. May 17 00:43:14.326579 env[1335]: time="2025-05-17T00:43:14.321397980Z" level=error msg="encountered an error cleaning up failed sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.326579 env[1335]: time="2025-05-17T00:43:14.321541220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wllmd,Uid:2b0f4094-a401-446d-a3d3-fefd3d968c34,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.327026 kubelet[2220]: E0517 00:43:14.321848 2220 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.327026 kubelet[2220]: E0517 00:43:14.321937 2220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wllmd" May 17 00:43:14.327026 kubelet[2220]: E0517 00:43:14.321986 2220 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wllmd" May 17 00:43:14.329860 kubelet[2220]: E0517 00:43:14.322042 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wllmd_kube-system(2b0f4094-a401-446d-a3d3-fefd3d968c34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wllmd_kube-system(2b0f4094-a401-446d-a3d3-fefd3d968c34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wllmd" podUID="2b0f4094-a401-446d-a3d3-fefd3d968c34" May 17 00:43:14.331668 env[1335]: time="2025-05-17T00:43:14.331589180Z" level=error msg="Failed to destroy network for sandbox 
\"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.334036 env[1335]: time="2025-05-17T00:43:14.333954746Z" level=error msg="encountered an error cleaning up failed sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.334461 env[1335]: time="2025-05-17T00:43:14.334392703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-szlbf,Uid:13d7a51f-4298-4a96-98ab-641457e5522e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.335754 kubelet[2220]: E0517 00:43:14.334933 2220 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.335754 kubelet[2220]: E0517 00:43:14.335044 2220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-szlbf" May 17 00:43:14.335754 kubelet[2220]: E0517 00:43:14.335106 2220 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-szlbf" May 17 00:43:14.336232 kubelet[2220]: E0517 00:43:14.335219 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-szlbf_kube-system(13d7a51f-4298-4a96-98ab-641457e5522e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-szlbf_kube-system(13d7a51f-4298-4a96-98ab-641457e5522e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-szlbf" podUID="13d7a51f-4298-4a96-98ab-641457e5522e" May 17 00:43:14.345621 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783-shm.mount: Deactivated successfully. 
May 17 00:43:14.389700 env[1335]: time="2025-05-17T00:43:14.389611159Z" level=error msg="Failed to destroy network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.399658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6-shm.mount: Deactivated successfully. May 17 00:43:14.403342 env[1335]: time="2025-05-17T00:43:14.403263591Z" level=error msg="encountered an error cleaning up failed sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.403517 env[1335]: time="2025-05-17T00:43:14.403371196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bcc67d899-smqwx,Uid:3e13e7b4-97cd-418e-853c-b1f9053a0bd8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.403721 kubelet[2220]: E0517 00:43:14.403671 2220 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 
00:43:14.403844 kubelet[2220]: E0517 00:43:14.403762 2220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bcc67d899-smqwx" May 17 00:43:14.403844 kubelet[2220]: E0517 00:43:14.403798 2220 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bcc67d899-smqwx" May 17 00:43:14.405730 kubelet[2220]: E0517 00:43:14.403876 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bcc67d899-smqwx_calico-system(3e13e7b4-97cd-418e-853c-b1f9053a0bd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bcc67d899-smqwx_calico-system(3e13e7b4-97cd-418e-853c-b1f9053a0bd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bcc67d899-smqwx" podUID="3e13e7b4-97cd-418e-853c-b1f9053a0bd8" May 17 00:43:14.416505 env[1335]: time="2025-05-17T00:43:14.416365300Z" level=error msg="StopPodSandbox for \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\" failed" error="failed to destroy 
network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.416780 kubelet[2220]: E0517 00:43:14.416726 2220 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:14.416971 kubelet[2220]: E0517 00:43:14.416806 2220 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e"} May 17 00:43:14.416971 kubelet[2220]: E0517 00:43:14.416900 2220 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05a2ff53-9d59-4c60-9b54-47fa2348be26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:43:14.416971 kubelet[2220]: E0517 00:43:14.416947 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05a2ff53-9d59-4c60-9b54-47fa2348be26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bf9b9cc9-h6zp7" podUID="05a2ff53-9d59-4c60-9b54-47fa2348be26" May 17 00:43:14.429741 env[1335]: time="2025-05-17T00:43:14.429662643Z" level=error msg="Failed to destroy network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.440798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b-shm.mount: Deactivated successfully. May 17 00:43:14.445990 env[1335]: time="2025-05-17T00:43:14.445894194Z" level=error msg="encountered an error cleaning up failed sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.446132 env[1335]: time="2025-05-17T00:43:14.446018543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7cfb968-xnd89,Uid:877258a8-70e4-4a88-a629-d7c04d184c1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.446352 kubelet[2220]: E0517 00:43:14.446300 2220 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.446515 kubelet[2220]: E0517 00:43:14.446387 2220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f7cfb968-xnd89" May 17 00:43:14.446515 kubelet[2220]: E0517 00:43:14.446445 2220 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f7cfb968-xnd89" May 17 00:43:14.446658 kubelet[2220]: E0517 00:43:14.446505 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f7cfb968-xnd89_calico-system(877258a8-70e4-4a88-a629-d7c04d184c1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f7cfb968-xnd89_calico-system(877258a8-70e4-4a88-a629-d7c04d184c1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f7cfb968-xnd89" podUID="877258a8-70e4-4a88-a629-d7c04d184c1d" May 17 00:43:14.449318 env[1335]: time="2025-05-17T00:43:14.449247935Z" level=error msg="Failed to destroy network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.450115 env[1335]: time="2025-05-17T00:43:14.450044525Z" level=error msg="encountered an error cleaning up failed sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.450251 env[1335]: time="2025-05-17T00:43:14.450145184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-4qrzt,Uid:67d969c1-4d93-44a9-a00c-87eca6fdadfb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.450474 kubelet[2220]: E0517 00:43:14.450399 2220 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.450601 kubelet[2220]: E0517 00:43:14.450500 2220 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-4qrzt" May 17 00:43:14.450601 kubelet[2220]: E0517 00:43:14.450543 2220 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-4qrzt" May 17 00:43:14.450752 kubelet[2220]: E0517 00:43:14.450605 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-4qrzt_calico-system(67d969c1-4d93-44a9-a00c-87eca6fdadfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-4qrzt_calico-system(67d969c1-4d93-44a9-a00c-87eca6fdadfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb" May 17 00:43:14.461038 env[1335]: time="2025-05-17T00:43:14.460960546Z" level=error msg="Failed to destroy network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.461539 env[1335]: time="2025-05-17T00:43:14.461485572Z" level=error msg="encountered an error cleaning up failed sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.461654 env[1335]: time="2025-05-17T00:43:14.461576314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf9b9cc9-xp6vq,Uid:9ab41193-998e-4949-bcb0-dcdfdf7aa08f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.461867 kubelet[2220]: E0517 00:43:14.461808 2220 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.461999 kubelet[2220]: E0517 00:43:14.461896 2220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-bf9b9cc9-xp6vq" May 17 00:43:14.461999 kubelet[2220]: E0517 00:43:14.461931 2220 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bf9b9cc9-xp6vq" May 17 00:43:14.462142 kubelet[2220]: E0517 00:43:14.461984 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bf9b9cc9-xp6vq_calico-apiserver(9ab41193-998e-4949-bcb0-dcdfdf7aa08f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bf9b9cc9-xp6vq_calico-apiserver(9ab41193-998e-4949-bcb0-dcdfdf7aa08f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bf9b9cc9-xp6vq" podUID="9ab41193-998e-4949-bcb0-dcdfdf7aa08f" May 17 00:43:14.483374 env[1335]: time="2025-05-17T00:43:14.479937852Z" level=error msg="Failed to destroy network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.483374 env[1335]: time="2025-05-17T00:43:14.480484525Z" level=error msg="encountered an error cleaning up failed sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.483374 env[1335]: time="2025-05-17T00:43:14.480759515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-64ww9,Uid:603f05d6-04eb-4ce3-baf0-5f232fe52221,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.483957 kubelet[2220]: E0517 00:43:14.481231 2220 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:14.483957 kubelet[2220]: E0517 00:43:14.481296 2220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-64ww9" May 17 00:43:14.483957 kubelet[2220]: E0517 00:43:14.481364 2220 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-64ww9" May 17 00:43:14.484185 kubelet[2220]: E0517 00:43:14.481440 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-64ww9_calico-system(603f05d6-04eb-4ce3-baf0-5f232fe52221)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-64ww9_calico-system(603f05d6-04eb-4ce3-baf0-5f232fe52221)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-64ww9" podUID="603f05d6-04eb-4ce3-baf0-5f232fe52221" May 17 00:43:15.178799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04-shm.mount: Deactivated successfully. May 17 00:43:15.179072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e-shm.mount: Deactivated successfully. May 17 00:43:15.179253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a-shm.mount: Deactivated successfully. 
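Every CNI failure in the entries above is the plugin stat-ing `/var/lib/calico/nodename`, a file the calico/node container writes once it has started and mounted `/var/lib/calico/` (the log's own hint: "check that the calico/node container is running"). As an illustrative sketch only — `CALICO_DIR` is a stand-in variable so the check runs anywhere; on a real node the path is `/var/lib/calico` and the file appears once calico-node is up, as the later entries here show:

```shell
#!/bin/sh
# Sketch of the readiness check the Calico CNI plugin performs.
# CALICO_DIR is a hypothetical stand-in; defaults to a scratch dir so this
# runs on any machine. On an actual node you would point it at /var/lib/calico.
CALICO_DIR="${CALICO_DIR:-$(mktemp -d)}"

if [ -f "$CALICO_DIR/nodename" ]; then
    # calico/node has initialized and recorded this node's name
    echo "calico/node initialized: $(cat "$CALICO_DIR/nodename")"
else
    # Matches the failure mode logged above: sandbox setup/teardown will
    # keep erroring until calico/node is running and the dir is mounted.
    echo "nodename missing: calico/node not running or /var/lib/calico not mounted"
fi
```

Consistent with this, the errors stop recurring in the log once the `ghcr.io/flatcar/calico/node:v3.30.0` image finishes pulling and the calico-node container starts (the `StartContainer ... returns successfully` entry below), after which the retried `StopPodSandbox` teardowns complete.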
May 17 00:43:15.227384 kubelet[2220]: I0517 00:43:15.222805 2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:15.228572 env[1335]: time="2025-05-17T00:43:15.228526989Z" level=info msg="StopPodSandbox for \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\"" May 17 00:43:15.248386 kubelet[2220]: I0517 00:43:15.244185 2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:15.249095 env[1335]: time="2025-05-17T00:43:15.249053626Z" level=info msg="StopPodSandbox for \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\"" May 17 00:43:15.256546 kubelet[2220]: I0517 00:43:15.254313 2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:15.261285 env[1335]: time="2025-05-17T00:43:15.261224927Z" level=info msg="StopPodSandbox for \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\"" May 17 00:43:15.272984 kubelet[2220]: I0517 00:43:15.270590 2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:15.279461 env[1335]: time="2025-05-17T00:43:15.279385967Z" level=info msg="StopPodSandbox for \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\"" May 17 00:43:15.283547 kubelet[2220]: I0517 00:43:15.281310 2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:15.290788 env[1335]: time="2025-05-17T00:43:15.285512813Z" level=info msg="StopPodSandbox for \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\"" May 17 00:43:15.319703 kubelet[2220]: 
I0517 00:43:15.318032 2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:15.323606 env[1335]: time="2025-05-17T00:43:15.323539692Z" level=info msg="StopPodSandbox for \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\"" May 17 00:43:15.371740 env[1335]: time="2025-05-17T00:43:15.371665213Z" level=error msg="StopPodSandbox for \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\" failed" error="failed to destroy network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:15.376865 kubelet[2220]: E0517 00:43:15.376556 2220 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:15.376865 kubelet[2220]: E0517 00:43:15.376638 2220 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e"} May 17 00:43:15.376865 kubelet[2220]: E0517 00:43:15.376700 2220 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ab41193-998e-4949-bcb0-dcdfdf7aa08f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:43:15.376865 kubelet[2220]: E0517 00:43:15.376751 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ab41193-998e-4949-bcb0-dcdfdf7aa08f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bf9b9cc9-xp6vq" podUID="9ab41193-998e-4949-bcb0-dcdfdf7aa08f" May 17 00:43:15.380201 kubelet[2220]: I0517 00:43:15.379218 2220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:15.389212 env[1335]: time="2025-05-17T00:43:15.389160585Z" level=info msg="StopPodSandbox for \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\"" May 17 00:43:15.392276 env[1335]: time="2025-05-17T00:43:15.392201813Z" level=error msg="StopPodSandbox for \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\" failed" error="failed to destroy network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:15.393458 kubelet[2220]: E0517 00:43:15.393162 2220 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:15.393458 kubelet[2220]: E0517 00:43:15.393230 2220 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b"} May 17 00:43:15.393458 kubelet[2220]: E0517 00:43:15.393283 2220 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b0f4094-a401-446d-a3d3-fefd3d968c34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:43:15.393458 kubelet[2220]: E0517 00:43:15.393341 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b0f4094-a401-446d-a3d3-fefd3d968c34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wllmd" podUID="2b0f4094-a401-446d-a3d3-fefd3d968c34" May 17 00:43:15.528663 env[1335]: time="2025-05-17T00:43:15.526465278Z" level=error msg="StopPodSandbox for \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\" failed" error="failed to destroy network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:15.528869 kubelet[2220]: E0517 00:43:15.528387 2220 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:15.528869 kubelet[2220]: E0517 00:43:15.528471 2220 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783"} May 17 00:43:15.528869 kubelet[2220]: E0517 00:43:15.528538 2220 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"13d7a51f-4298-4a96-98ab-641457e5522e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:43:15.528869 kubelet[2220]: E0517 00:43:15.528575 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"13d7a51f-4298-4a96-98ab-641457e5522e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-szlbf" podUID="13d7a51f-4298-4a96-98ab-641457e5522e" May 17 00:43:15.534927 env[1335]: time="2025-05-17T00:43:15.534844157Z" level=error msg="StopPodSandbox for \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\" failed" error="failed to destroy network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:15.535662 kubelet[2220]: E0517 00:43:15.535395 2220 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:15.535662 kubelet[2220]: E0517 00:43:15.535496 2220 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b"} May 17 00:43:15.535662 kubelet[2220]: E0517 00:43:15.535551 2220 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"877258a8-70e4-4a88-a629-d7c04d184c1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:43:15.535662 kubelet[2220]: E0517 00:43:15.535589 2220 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"877258a8-70e4-4a88-a629-d7c04d184c1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f7cfb968-xnd89" podUID="877258a8-70e4-4a88-a629-d7c04d184c1d" May 17 00:43:15.537545 env[1335]: time="2025-05-17T00:43:15.537475278Z" level=error msg="StopPodSandbox for \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\" failed" error="failed to destroy network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:15.537999 kubelet[2220]: E0517 00:43:15.537806 2220 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:15.537999 kubelet[2220]: E0517 00:43:15.537859 2220 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04"} May 17 00:43:15.537999 kubelet[2220]: E0517 00:43:15.537909 2220 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"603f05d6-04eb-4ce3-baf0-5f232fe52221\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:43:15.537999 kubelet[2220]: E0517 00:43:15.537943 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"603f05d6-04eb-4ce3-baf0-5f232fe52221\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-64ww9" podUID="603f05d6-04eb-4ce3-baf0-5f232fe52221" May 17 00:43:15.552181 env[1335]: time="2025-05-17T00:43:15.552098633Z" level=error msg="StopPodSandbox for \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\" failed" error="failed to destroy network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:43:15.552904 kubelet[2220]: E0517 00:43:15.552668 2220 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:15.552904 kubelet[2220]: E0517 00:43:15.552733 2220 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a"} May 17 00:43:15.552904 kubelet[2220]: E0517 00:43:15.552790 2220 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67d969c1-4d93-44a9-a00c-87eca6fdadfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:43:15.552904 kubelet[2220]: E0517 00:43:15.552830 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67d969c1-4d93-44a9-a00c-87eca6fdadfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb" May 17 00:43:15.564203 env[1335]: time="2025-05-17T00:43:15.564134029Z" level=error msg="StopPodSandbox for \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\" failed" error="failed to destroy network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 
17 00:43:15.564810 kubelet[2220]: E0517 00:43:15.564643 2220 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:15.564810 kubelet[2220]: E0517 00:43:15.564705 2220 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6"} May 17 00:43:15.565156 kubelet[2220]: E0517 00:43:15.564761 2220 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:43:15.565156 kubelet[2220]: E0517 00:43:15.565106 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bcc67d899-smqwx" podUID="3e13e7b4-97cd-418e-853c-b1f9053a0bd8" May 17 00:43:21.897488 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2817797456.mount: Deactivated successfully. May 17 00:43:21.934162 env[1335]: time="2025-05-17T00:43:21.934083850Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:21.937135 env[1335]: time="2025-05-17T00:43:21.937079060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:21.939538 env[1335]: time="2025-05-17T00:43:21.939489933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:21.942482 env[1335]: time="2025-05-17T00:43:21.942410878Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:21.943612 env[1335]: time="2025-05-17T00:43:21.943562988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:43:21.974016 env[1335]: time="2025-05-17T00:43:21.973957653Z" level=info msg="CreateContainer within sandbox \"f4a025588b099cbe70c9351ed317b64e7f1040d55f803749286b09435021f5b8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:43:22.006180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754658334.mount: Deactivated successfully. 
May 17 00:43:22.013781 env[1335]: time="2025-05-17T00:43:22.013683407Z" level=info msg="CreateContainer within sandbox \"f4a025588b099cbe70c9351ed317b64e7f1040d55f803749286b09435021f5b8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a8846f4a61e0929895aa5693269ce1a469c915109549821327692773f69d840\"" May 17 00:43:22.016408 env[1335]: time="2025-05-17T00:43:22.014882381Z" level=info msg="StartContainer for \"4a8846f4a61e0929895aa5693269ce1a469c915109549821327692773f69d840\"" May 17 00:43:22.112766 env[1335]: time="2025-05-17T00:43:22.112685693Z" level=info msg="StartContainer for \"4a8846f4a61e0929895aa5693269ce1a469c915109549821327692773f69d840\" returns successfully" May 17 00:43:22.260340 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:43:22.260573 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 00:43:22.387257 env[1335]: time="2025-05-17T00:43:22.387181911Z" level=info msg="StopPodSandbox for \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\"" May 17 00:43:22.599160 kubelet[2220]: I0517 00:43:22.599057 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n2l6j" podStartSLOduration=2.640146467 podStartE2EDuration="20.599030553s" podCreationTimestamp="2025-05-17 00:43:02 +0000 UTC" firstStartedPulling="2025-05-17 00:43:03.986367005 +0000 UTC m=+23.270696513" lastFinishedPulling="2025-05-17 00:43:21.945251099 +0000 UTC m=+41.229580599" observedRunningTime="2025-05-17 00:43:22.457185767 +0000 UTC m=+41.741515284" watchObservedRunningTime="2025-05-17 00:43:22.599030553 +0000 UTC m=+41.883360073" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.602 [INFO][3492] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.603 [INFO][3492] cni-plugin/dataplane_linux.go 
559: Deleting workload's device in netns. ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" iface="eth0" netns="/var/run/netns/cni-7ee9bee6-e67e-f1ea-3c8d-2451de9e4bc3" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.603 [INFO][3492] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" iface="eth0" netns="/var/run/netns/cni-7ee9bee6-e67e-f1ea-3c8d-2451de9e4bc3" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.603 [INFO][3492] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" iface="eth0" netns="/var/run/netns/cni-7ee9bee6-e67e-f1ea-3c8d-2451de9e4bc3" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.603 [INFO][3492] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.603 [INFO][3492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.659 [INFO][3519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.660 [INFO][3519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.660 [INFO][3519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.681 [WARNING][3519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.681 [INFO][3519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.691 [INFO][3519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:22.699355 env[1335]: 2025-05-17 00:43:22.696 [INFO][3492] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:22.700523 env[1335]: time="2025-05-17T00:43:22.699895178Z" level=info msg="TearDown network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\" successfully" May 17 00:43:22.700523 env[1335]: time="2025-05-17T00:43:22.699949449Z" level=info msg="StopPodSandbox for \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\" returns successfully" May 17 00:43:22.823395 kubelet[2220]: I0517 00:43:22.823334 2220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-whisker-backend-key-pair\") pod \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\" (UID: \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\") " May 17 00:43:22.823652 kubelet[2220]: I0517 00:43:22.823442 2220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77b2b\" (UniqueName: \"kubernetes.io/projected/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-kube-api-access-77b2b\") pod \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\" (UID: \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\") " May 17 00:43:22.823652 kubelet[2220]: I0517 00:43:22.823486 2220 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-whisker-ca-bundle\") pod \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\" (UID: \"3e13e7b4-97cd-418e-853c-b1f9053a0bd8\") " May 17 00:43:22.824622 kubelet[2220]: I0517 00:43:22.824555 2220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3e13e7b4-97cd-418e-853c-b1f9053a0bd8" (UID: "3e13e7b4-97cd-418e-853c-b1f9053a0bd8"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:43:22.835703 kubelet[2220]: I0517 00:43:22.835636 2220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-kube-api-access-77b2b" (OuterVolumeSpecName: "kube-api-access-77b2b") pod "3e13e7b4-97cd-418e-853c-b1f9053a0bd8" (UID: "3e13e7b4-97cd-418e-853c-b1f9053a0bd8"). InnerVolumeSpecName "kube-api-access-77b2b". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:43:22.835900 kubelet[2220]: I0517 00:43:22.835846 2220 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3e13e7b4-97cd-418e-853c-b1f9053a0bd8" (UID: "3e13e7b4-97cd-418e-853c-b1f9053a0bd8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:43:22.899810 systemd[1]: run-netns-cni\x2d7ee9bee6\x2de67e\x2df1ea\x2d3c8d\x2d2451de9e4bc3.mount: Deactivated successfully. May 17 00:43:22.900744 systemd[1]: var-lib-kubelet-pods-3e13e7b4\x2d97cd\x2d418e\x2d853c\x2db1f9053a0bd8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77b2b.mount: Deactivated successfully. May 17 00:43:22.900977 systemd[1]: var-lib-kubelet-pods-3e13e7b4\x2d97cd\x2d418e\x2d853c\x2db1f9053a0bd8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 17 00:43:22.924125 kubelet[2220]: I0517 00:43:22.924066 2220 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-whisker-ca-bundle\") on node \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" DevicePath \"\"" May 17 00:43:22.924125 kubelet[2220]: I0517 00:43:22.924124 2220 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-whisker-backend-key-pair\") on node \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" DevicePath \"\"" May 17 00:43:22.924396 kubelet[2220]: I0517 00:43:22.924149 2220 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77b2b\" (UniqueName: \"kubernetes.io/projected/3e13e7b4-97cd-418e-853c-b1f9053a0bd8-kube-api-access-77b2b\") on node \"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260\" DevicePath \"\"" May 17 00:43:23.455468 systemd[1]: run-containerd-runc-k8s.io-4a8846f4a61e0929895aa5693269ce1a469c915109549821327692773f69d840-runc.w0JiUb.mount: Deactivated successfully. 
May 17 00:43:23.639093 kubelet[2220]: I0517 00:43:23.639025 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b432bbba-2503-4442-8976-021c08969eff-whisker-backend-key-pair\") pod \"whisker-59cd79cdc-md8tk\" (UID: \"b432bbba-2503-4442-8976-021c08969eff\") " pod="calico-system/whisker-59cd79cdc-md8tk" May 17 00:43:23.639093 kubelet[2220]: I0517 00:43:23.639101 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4w8m\" (UniqueName: \"kubernetes.io/projected/b432bbba-2503-4442-8976-021c08969eff-kube-api-access-f4w8m\") pod \"whisker-59cd79cdc-md8tk\" (UID: \"b432bbba-2503-4442-8976-021c08969eff\") " pod="calico-system/whisker-59cd79cdc-md8tk" May 17 00:43:23.640115 kubelet[2220]: I0517 00:43:23.639137 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b432bbba-2503-4442-8976-021c08969eff-whisker-ca-bundle\") pod \"whisker-59cd79cdc-md8tk\" (UID: \"b432bbba-2503-4442-8976-021c08969eff\") " pod="calico-system/whisker-59cd79cdc-md8tk" May 17 00:43:23.828684 env[1335]: time="2025-05-17T00:43:23.828561784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cd79cdc-md8tk,Uid:b432bbba-2503-4442-8976-021c08969eff,Namespace:calico-system,Attempt:0,}" May 17 00:43:24.070000 audit[3644]: AVC avc: denied { write } for pid=3644 comm="tee" name="fd" dev="proc" ino=24968 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:43:24.093473 kernel: audit: type=1400 audit(1747442604.070:286): avc: denied { write } for pid=3644 comm="tee" name="fd" dev="proc" ino=24968 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:43:24.070000 audit[3644]: SYSCALL arch=c000003e 
syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4882e788 a2=241 a3=1b6 items=1 ppid=3592 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.130749 kernel: audit: type=1300 audit(1747442604.070:286): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4882e788 a2=241 a3=1b6 items=1 ppid=3592 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.070000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 17 00:43:24.181796 kernel: audit: type=1307 audit(1747442604.070:286): cwd="/etc/service/enabled/node-status-reporter/log" May 17 00:43:24.181964 kernel: audit: type=1302 audit(1747442604.070:286): item=0 name="/dev/fd/63" inode=24962 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.070000 audit: PATH item=0 name="/dev/fd/63" inode=24962 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.070000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.226808 kernel: audit: type=1327 audit(1747442604.070:286): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.227010 kernel: audit: type=1400 audit(1747442604.129:287): avc: denied { write } for pid=3641 comm="tee" name="fd" dev="proc" ino=24976 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=dir permissive=0 May 17 00:43:24.129000 audit[3641]: AVC avc: denied { write } for pid=3641 comm="tee" name="fd" dev="proc" ino=24976 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:43:24.129000 audit[3641]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffee287d797 a2=241 a3=1b6 items=1 ppid=3596 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.259695 kernel: audit: type=1300 audit(1747442604.129:287): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffee287d797 a2=241 a3=1b6 items=1 ppid=3596 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.129000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 17 00:43:24.291152 kernel: audit: type=1307 audit(1747442604.129:287): cwd="/etc/service/enabled/bird6/log" May 17 00:43:24.291326 kernel: audit: type=1302 audit(1747442604.129:287): item=0 name="/dev/fd/63" inode=24961 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.129000 audit: PATH item=0 name="/dev/fd/63" inode=24961 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.129000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.310613 kernel: audit: type=1327 audit(1747442604.129:287): 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.208000 audit[3651]: AVC avc: denied { write } for pid=3651 comm="tee" name="fd" dev="proc" ino=24982 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:43:24.208000 audit[3651]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe1febb798 a2=241 a3=1b6 items=1 ppid=3594 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.208000 audit: CWD cwd="/etc/service/enabled/bird/log" May 17 00:43:24.208000 audit: PATH item=0 name="/dev/fd/63" inode=25671 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.208000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.222000 audit[3657]: AVC avc: denied { write } for pid=3657 comm="tee" name="fd" dev="proc" ino=24986 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:43:24.222000 audit[3657]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff24c25787 a2=241 a3=1b6 items=1 ppid=3589 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.222000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 17 00:43:24.222000 audit: PATH item=0 name="/dev/fd/63" inode=24973 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.222000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.313000 audit[3654]: AVC avc: denied { write } for pid=3654 comm="tee" name="fd" dev="proc" ino=25685 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:43:24.316000 audit[3649]: AVC avc: denied { write } for pid=3649 comm="tee" name="fd" dev="proc" ino=25691 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:43:24.316000 audit[3649]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd04a6d799 a2=241 a3=1b6 items=1 ppid=3587 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.316000 audit: CWD cwd="/etc/service/enabled/cni/log" May 17 00:43:24.316000 audit: PATH item=0 name="/dev/fd/63" inode=25668 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.316000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.313000 audit[3654]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbfecc797 a2=241 a3=1b6 items=1 ppid=3597 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.313000 audit: CWD cwd="/etc/service/enabled/felix/log" May 17 00:43:24.313000 audit: PATH item=0 name="/dev/fd/63" 
inode=24972 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.313000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.320000 audit[3669]: AVC avc: denied { write } for pid=3669 comm="tee" name="fd" dev="proc" ino=24991 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:43:24.320000 audit[3669]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdf1448797 a2=241 a3=1b6 items=1 ppid=3600 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.320000 audit: CWD cwd="/etc/service/enabled/confd/log" May 17 00:43:24.320000 audit: PATH item=0 name="/dev/fd/63" inode=25688 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:43:24.320000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:43:24.325914 systemd-networkd[1075]: cali6204abc943e: Link UP May 17 00:43:24.343569 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:43:24.343752 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6204abc943e: link becomes ready May 17 00:43:24.358765 systemd-networkd[1075]: cali6204abc943e: Gained carrier May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:23.874 [INFO][3569] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:23.925 [INFO][3569] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0 whisker-59cd79cdc- calico-system b432bbba-2503-4442-8976-021c08969eff 886 0 2025-05-17 00:43:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59cd79cdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260 whisker-59cd79cdc-md8tk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6204abc943e [] [] }} ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Namespace="calico-system" Pod="whisker-59cd79cdc-md8tk" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:23.925 [INFO][3569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Namespace="calico-system" Pod="whisker-59cd79cdc-md8tk" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.040 [INFO][3609] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" HandleID="k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.041 [INFO][3609] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" HandleID="k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" 
Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d1660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", "pod":"whisker-59cd79cdc-md8tk", "timestamp":"2025-05-17 00:43:24.035368412 +0000 UTC"}, Hostname:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.041 [INFO][3609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.041 [INFO][3609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.041 [INFO][3609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.053 [INFO][3609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.059 [INFO][3609] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.078 [INFO][3609] ipam/ipam.go 511: Trying affinity for 192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.095 [INFO][3609] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 
env[1335]: 2025-05-17 00:43:24.100 [INFO][3609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.100 [INFO][3609] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.146 [INFO][3609] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993 May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.163 [INFO][3609] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.192/26 handle="k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.177 [INFO][3609] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.193/26] block=192.168.93.192/26 handle="k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.177 [INFO][3609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.193/26] handle="k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.177 [INFO][3609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:43:24.404413 env[1335]: 2025-05-17 00:43:24.178 [INFO][3609] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.193/26] IPv6=[] ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" HandleID="k8s-pod-network.4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" May 17 00:43:24.405916 env[1335]: 2025-05-17 00:43:24.184 [INFO][3569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Namespace="calico-system" Pod="whisker-59cd79cdc-md8tk" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0", GenerateName:"whisker-59cd79cdc-", Namespace:"calico-system", SelfLink:"", UID:"b432bbba-2503-4442-8976-021c08969eff", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cd79cdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"", Pod:"whisker-59cd79cdc-md8tk", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.93.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6204abc943e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:24.405916 env[1335]: 2025-05-17 00:43:24.184 [INFO][3569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.193/32] ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Namespace="calico-system" Pod="whisker-59cd79cdc-md8tk" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" May 17 00:43:24.405916 env[1335]: 2025-05-17 00:43:24.185 [INFO][3569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6204abc943e ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Namespace="calico-system" Pod="whisker-59cd79cdc-md8tk" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" May 17 00:43:24.405916 env[1335]: 2025-05-17 00:43:24.355 [INFO][3569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Namespace="calico-system" Pod="whisker-59cd79cdc-md8tk" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" May 17 00:43:24.405916 env[1335]: 2025-05-17 00:43:24.356 [INFO][3569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Namespace="calico-system" Pod="whisker-59cd79cdc-md8tk" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0", GenerateName:"whisker-59cd79cdc-", Namespace:"calico-system", SelfLink:"", UID:"b432bbba-2503-4442-8976-021c08969eff", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59cd79cdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993", Pod:"whisker-59cd79cdc-md8tk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.93.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6204abc943e", MAC:"3a:88:4c:58:91:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:24.405916 env[1335]: 2025-05-17 00:43:24.394 [INFO][3569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993" Namespace="calico-system" Pod="whisker-59cd79cdc-md8tk" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--59cd79cdc--md8tk-eth0" May 17 00:43:24.448028 env[1335]: time="2025-05-17T00:43:24.447898026Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:24.448238 env[1335]: time="2025-05-17T00:43:24.448052088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:24.448238 env[1335]: time="2025-05-17T00:43:24.448100714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:24.448453 env[1335]: time="2025-05-17T00:43:24.448366976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993 pid=3687 runtime=io.containerd.runc.v2 May 17 00:43:24.531986 systemd[1]: run-containerd-runc-k8s.io-4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993-runc.UfIwog.mount: Deactivated successfully. May 17 00:43:24.695303 env[1335]: time="2025-05-17T00:43:24.695129352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59cd79cdc-md8tk,Uid:b432bbba-2503-4442-8976-021c08969eff,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c19cdc7372bb41cea59d5121b1a5e8c61b4f91bcc99e38b56f7127b454b8993\"" May 17 00:43:24.700292 env[1335]: time="2025-05-17T00:43:24.700243112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:43:24.822462 env[1335]: time="2025-05-17T00:43:24.820855513Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:24.822669 env[1335]: time="2025-05-17T00:43:24.822395432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: 
unexpected status: 403 Forbidden" May 17 00:43:24.822835 kubelet[2220]: E0517 00:43:24.822775 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:43:24.823416 kubelet[2220]: E0517 00:43:24.822856 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:43:24.823539 kubelet[2220]: E0517 00:43:24.823064 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:247aae74375a41939121aaa0f0cbe7f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOpti
ons:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cd79cdc-md8tk_calico-system(b432bbba-2503-4442-8976-021c08969eff): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:24.827099 env[1335]: time="2025-05-17T00:43:24.827045733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:43:24.840000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.840000 audit: BPF prog-id=10 op=LOAD May 17 00:43:24.840000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd72fbf50 a2=98 a3=3 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.840000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:24.841000 audit: BPF prog-id=10 op=UNLOAD May 17 00:43:24.841000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit: BPF prog-id=11 op=LOAD May 17 00:43:24.841000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd72fbd40 a2=94 a3=54428f items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.841000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:24.841000 audit: BPF prog-id=11 op=UNLOAD May 17 00:43:24.841000 audit[3734]: AVC avc: denied { bpf } 
for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:24.841000 audit: BPF prog-id=12 op=LOAD May 17 00:43:24.841000 audit[3734]: SYSCALL arch=c000003e syscall=321 
success=yes exit=4 a0=5 a1=7fffd72fbd70 a2=94 a3=2 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:24.841000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:24.841000 audit: BPF prog-id=12 op=UNLOAD May 17 00:43:24.936111 kubelet[2220]: I0517 00:43:24.936054 2220 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e13e7b4-97cd-418e-853c-b1f9053a0bd8" path="/var/lib/kubelet/pods/3e13e7b4-97cd-418e-853c-b1f9053a0bd8/volumes" May 17 00:43:24.954669 env[1335]: time="2025-05-17T00:43:24.954497506Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:24.957788 env[1335]: time="2025-05-17T00:43:24.957694249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:24.958116 kubelet[2220]: E0517 00:43:24.958067 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:43:24.958256 kubelet[2220]: E0517 00:43:24.958135 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:43:24.958352 kubelet[2220]: E0517 00:43:24.958286 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMe
ssagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cd79cdc-md8tk_calico-system(b432bbba-2503-4442-8976-021c08969eff): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:24.959838 kubelet[2220]: E0517 00:43:24.959777 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff" May 17 00:43:25.033000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:43:25.033000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit: BPF prog-id=13 op=LOAD May 17 00:43:25.033000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd72fbc30 a2=94 a3=1 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.033000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.033000 audit: BPF prog-id=13 op=UNLOAD May 17 00:43:25.033000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.033000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffd72fbd00 a2=50 a3=7fffd72fbde0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.033000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd72fbc40 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd72fbc70 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd72fbb80 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd72fbc90 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd72fbc70 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7fffd72fbc60 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd72fbc90 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd72fbc70 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.046000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.046000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd72fbc90 a2=28 a3=0 items=0 
ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.047000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.047000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffd72fbc60 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.047000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.047000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffd72fbcd0 a2=28 a3=0 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffd72fba80 a2=50 a3=1 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.048000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } 
for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit: BPF prog-id=14 op=LOAD May 17 00:43:25.048000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffd72fba80 a2=94 a3=5 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.048000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.048000 audit: BPF prog-id=14 op=UNLOAD May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffd72fbb30 a2=50 a3=1 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.048000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffd72fbc50 a2=4 a3=38 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.048000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.048000 
audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { confidentiality } for pid=3734 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:43:25.048000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffd72fbca0 a2=94 a3=6 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.048000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.048000 audit[3734]: AVC avc: denied { confidentiality } for pid=3734 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:43:25.048000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffd72fb450 a2=94 a3=88 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.048000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.049000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.049000 audit[3734]: AVC avc: denied { bpf } for pid=3734 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.049000 audit[3734]: AVC avc: denied { perfmon } for pid=3734 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.049000 audit[3734]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffd72fb450 a2=94 a3=88 items=0 ppid=3606 pid=3734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.049000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit: BPF prog-id=15 op=LOAD May 17 00:43:25.063000 audit[3757]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf1678340 a2=98 a3=1999999999999999 items=0 ppid=3606 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.063000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:43:25.063000 audit: BPF prog-id=15 op=UNLOAD May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit: BPF prog-id=16 op=LOAD May 17 00:43:25.063000 audit[3757]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf1678220 a2=94 a3=ffff items=0 ppid=3606 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.063000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:43:25.063000 audit: BPF prog-id=16 op=UNLOAD May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { perfmon } for pid=3757 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit[3757]: AVC avc: denied { bpf } for pid=3757 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.063000 audit: BPF prog-id=17 op=LOAD May 17 00:43:25.063000 audit[3757]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf1678260 a2=94 a3=7ffcf1678440 items=0 ppid=3606 pid=3757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.063000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:43:25.063000 audit: BPF prog-id=17 op=UNLOAD May 17 00:43:25.196180 systemd-networkd[1075]: vxlan.calico: Link UP May 17 00:43:25.196194 systemd-networkd[1075]: vxlan.calico: Gained carrier May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit: BPF prog-id=18 op=LOAD May 17 00:43:25.226000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec7285d70 a2=98 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.226000 audit: BPF prog-id=18 op=UNLOAD May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit: BPF prog-id=19 op=LOAD May 17 00:43:25.226000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec7285b80 a2=94 a3=54428f items=0 ppid=3606 pid=3782 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.226000 audit: BPF prog-id=19 op=UNLOAD May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit: BPF prog-id=20 op=LOAD May 17 00:43:25.226000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec7285bb0 a2=94 a3=2 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.226000 audit: BPF prog-id=20 op=UNLOAD May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec7285a80 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec7285ab0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec72859c0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec7285ad0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.226000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.226000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec7285ab0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec7285aa0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.227000 audit[3782]: AVC avc: denied 
{ bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec7285ad0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec7285ab0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec7285ad0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec7285aa0 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec7285b10 a2=28 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.227000 audit[3782]: 
AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.227000 audit: BPF prog-id=21 op=LOAD May 17 00:43:25.227000 audit[3782]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec7285980 a2=94 a3=0 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.227000 audit: BPF prog-id=21 op=UNLOAD May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffec7285970 a2=50 a3=2800 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.229000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffec7285970 a2=50 a3=2800 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.229000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for 
pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit: BPF prog-id=22 op=LOAD May 17 00:43:25.229000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec7285190 a2=94 a3=2 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.229000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.229000 audit: BPF prog-id=22 op=UNLOAD May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { perfmon } for pid=3782 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit[3782]: AVC avc: denied { bpf } for pid=3782 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.229000 audit: BPF prog-id=23 op=LOAD May 17 00:43:25.229000 audit[3782]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec7285290 a2=94 a3=30 items=0 ppid=3606 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.229000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.236000 audit: BPF prog-id=24 op=LOAD May 17 00:43:25.236000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca8dda890 a2=98 a3=0 items=0 ppid=3606 pid=3785 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.236000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.236000 audit: BPF prog-id=24 op=UNLOAD May 17 00:43:25.237000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 
audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit: BPF prog-id=25 op=LOAD May 17 00:43:25.237000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca8dda680 a2=94 a3=54428f items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.237000 audit: BPF prog-id=25 op=UNLOAD May 17 00:43:25.237000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for 
pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.237000 audit: BPF prog-id=26 op=LOAD May 17 00:43:25.237000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca8dda6b0 a2=94 a3=2 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.237000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.237000 audit: BPF prog-id=26 op=UNLOAD May 17 00:43:25.373737 systemd-networkd[1075]: cali6204abc943e: Gained IPv6LL May 17 00:43:25.420327 kubelet[2220]: E0517 00:43:25.420270 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff" May 17 00:43:25.438000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.438000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.438000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.438000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.438000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.438000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.438000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.438000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:43:25.438000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.438000 audit: BPF prog-id=27 op=LOAD May 17 00:43:25.438000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffca8dda570 a2=94 a3=1 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.440000 audit: BPF prog-id=27 op=UNLOAD May 17 00:43:25.440000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.440000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffca8dda640 a2=50 a3=7ffca8dda720 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.440000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.468000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.468000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=12 a1=7ffca8dda580 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.468000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca8dda5b0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca8dda4c0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffca8dda5d0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffca8dda5b0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffca8dda5a0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffca8dda5d0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca8dda5b0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca8dda5d0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffca8dda5a0 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffca8dda610 a2=28 a3=0 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffca8dda3c0 a2=50 a3=1 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { perfmon } for 
pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.472000 audit: BPF prog-id=28 op=LOAD May 17 00:43:25.472000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffca8dda3c0 a2=94 a3=5 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.472000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.473000 audit: BPF prog-id=28 op=UNLOAD May 17 00:43:25.473000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffca8dda470 a2=50 a3=1 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffca8dda590 a2=4 a3=38 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.473000 audit[3785]: AVC avc: denied { confidentiality } for pid=3785 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:43:25.473000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffca8dda5e0 a2=94 a3=6 items=0 
ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.473000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { perfmon } for 
pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.474000 audit[3785]: AVC avc: denied { confidentiality } for pid=3785 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:43:25.474000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffca8dd9d90 a2=94 a3=88 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.474000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { confidentiality } for pid=3785 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:43:25.475000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffca8dd9d90 a2=94 a3=88 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.475000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffca8ddb7c0 a2=10 a3=f8f00800 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.475000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffca8ddb660 a2=10 a3=3 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.475000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffca8ddb600 a2=10 a3=3 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.475000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.475000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:43:25.475000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffca8ddb600 a2=10 a3=7 items=0 ppid=3606 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.475000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:43:25.480000 audit[3790]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=3790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:25.480000 audit[3790]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc209ed70 a2=0 a3=7ffcc209ed5c items=0 ppid=2367 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
00:43:25.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:25.489000 audit[3790]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=3790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:25.489000 audit[3790]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcc209ed70 a2=0 a3=0 items=0 ppid=2367 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:25.490000 audit: BPF prog-id=23 op=UNLOAD May 17 00:43:25.604000 audit[3814]: NETFILTER_CFG table=mangle:103 family=2 entries=16 op=nft_register_chain pid=3814 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:25.604000 audit[3814]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe034ad000 a2=0 a3=7ffe034acfec items=0 ppid=3606 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.604000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:25.620000 audit[3813]: NETFILTER_CFG table=nat:104 family=2 entries=15 op=nft_register_chain pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:25.620000 audit[3813]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd595c4250 a2=0 a3=7ffd595c423c items=0 ppid=3606 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.620000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:25.632000 audit[3812]: NETFILTER_CFG table=raw:105 family=2 entries=21 op=nft_register_chain pid=3812 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:25.632000 audit[3812]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd3a3d7f60 a2=0 a3=7ffd3a3d7f4c items=0 ppid=3606 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.632000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:25.637000 audit[3817]: NETFILTER_CFG table=filter:106 family=2 entries=94 op=nft_register_chain pid=3817 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:25.637000 audit[3817]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe01fef500 a2=0 a3=7ffe01fef4ec items=0 ppid=3606 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:25.637000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:25.936449 env[1335]: time="2025-05-17T00:43:25.933955493Z" level=info msg="StopPodSandbox for \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\"" May 17 
00:43:25.936449 env[1335]: time="2025-05-17T00:43:25.934314489Z" level=info msg="StopPodSandbox for \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\"" May 17 00:43:25.936449 env[1335]: time="2025-05-17T00:43:25.934347985Z" level=info msg="StopPodSandbox for \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\"" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.085 [INFO][3857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.089 [INFO][3857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" iface="eth0" netns="/var/run/netns/cni-f0ac59dd-d0f9-f73f-06b8-d6ad7a54a541" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.089 [INFO][3857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" iface="eth0" netns="/var/run/netns/cni-f0ac59dd-d0f9-f73f-06b8-d6ad7a54a541" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.089 [INFO][3857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" iface="eth0" netns="/var/run/netns/cni-f0ac59dd-d0f9-f73f-06b8-d6ad7a54a541" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.089 [INFO][3857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.089 [INFO][3857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.150 [INFO][3882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.151 [INFO][3882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.151 [INFO][3882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.163 [WARNING][3882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.163 [INFO][3882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.165 [INFO][3882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:26.178221 env[1335]: 2025-05-17 00:43:26.168 [INFO][3857] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:26.178221 env[1335]: time="2025-05-17T00:43:26.176877624Z" level=info msg="TearDown network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\" successfully" May 17 00:43:26.178221 env[1335]: time="2025-05-17T00:43:26.176948425Z" level=info msg="StopPodSandbox for \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\" returns successfully" May 17 00:43:26.176868 systemd[1]: run-netns-cni\x2df0ac59dd\x2dd0f9\x2df73f\x2d06b8\x2dd6ad7a54a541.mount: Deactivated successfully. 
May 17 00:43:26.180351 env[1335]: time="2025-05-17T00:43:26.179622829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf9b9cc9-h6zp7,Uid:05a2ff53-9d59-4c60-9b54-47fa2348be26,Namespace:calico-apiserver,Attempt:1,}" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.066 [INFO][3845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.066 [INFO][3845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" iface="eth0" netns="/var/run/netns/cni-4858f5fe-bfcc-0bc7-8709-18dcde1165c2" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.067 [INFO][3845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" iface="eth0" netns="/var/run/netns/cni-4858f5fe-bfcc-0bc7-8709-18dcde1165c2" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.067 [INFO][3845] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" iface="eth0" netns="/var/run/netns/cni-4858f5fe-bfcc-0bc7-8709-18dcde1165c2" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.067 [INFO][3845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.067 [INFO][3845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.235 [INFO][3877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.237 [INFO][3877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.250 [INFO][3877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.261 [WARNING][3877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.261 [INFO][3877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.264 [INFO][3877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:26.277799 env[1335]: 2025-05-17 00:43:26.267 [INFO][3845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:26.277799 env[1335]: time="2025-05-17T00:43:26.276059634Z" level=info msg="TearDown network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\" successfully" May 17 00:43:26.277799 env[1335]: time="2025-05-17T00:43:26.276119613Z" level=info msg="StopPodSandbox for \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\" returns successfully" May 17 00:43:26.277799 env[1335]: time="2025-05-17T00:43:26.277161124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf9b9cc9-xp6vq,Uid:9ab41193-998e-4949-bcb0-dcdfdf7aa08f,Namespace:calico-apiserver,Attempt:1,}" May 17 00:43:26.277029 systemd[1]: run-netns-cni\x2d4858f5fe\x2dbfcc\x2d0bc7\x2d8709\x2d18dcde1165c2.mount: Deactivated successfully. 
May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.093 [INFO][3862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.094 [INFO][3862] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" iface="eth0" netns="/var/run/netns/cni-b934852d-9e4d-abf7-91e0-08d503cde0a3" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.095 [INFO][3862] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" iface="eth0" netns="/var/run/netns/cni-b934852d-9e4d-abf7-91e0-08d503cde0a3" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.095 [INFO][3862] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" iface="eth0" netns="/var/run/netns/cni-b934852d-9e4d-abf7-91e0-08d503cde0a3" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.095 [INFO][3862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.095 [INFO][3862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.235 [INFO][3885] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.236 [INFO][3885] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.236 [INFO][3885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.248 [WARNING][3885] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.248 [INFO][3885] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.250 [INFO][3885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:26.279320 env[1335]: 2025-05-17 00:43:26.260 [INFO][3862] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:26.285284 env[1335]: time="2025-05-17T00:43:26.284986278Z" level=info msg="TearDown network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\" successfully" May 17 00:43:26.285284 env[1335]: time="2025-05-17T00:43:26.285040527Z" level=info msg="StopPodSandbox for \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\" returns successfully" May 17 00:43:26.285998 env[1335]: time="2025-05-17T00:43:26.285949992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wllmd,Uid:2b0f4094-a401-446d-a3d3-fefd3d968c34,Namespace:kube-system,Attempt:1,}" May 17 00:43:26.292514 systemd[1]: run-netns-cni\x2db934852d\x2d9e4d\x2dabf7\x2d91e0\x2d08d503cde0a3.mount: Deactivated successfully. May 17 00:43:26.437749 kubelet[2220]: E0517 00:43:26.437699 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff" May 17 00:43:26.578612 systemd-networkd[1075]: calic04c894dbf9: Link UP May 17 00:43:26.596268 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:43:26.596404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic04c894dbf9: link becomes ready May 17 00:43:26.598165 systemd-networkd[1075]: calic04c894dbf9: Gained carrier May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.368 [INFO][3897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0 
calico-apiserver-bf9b9cc9- calico-apiserver 05a2ff53-9d59-4c60-9b54-47fa2348be26 912 0 2025-05-17 00:42:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bf9b9cc9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260 calico-apiserver-bf9b9cc9-h6zp7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic04c894dbf9 [] [] }} ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-h6zp7" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.369 [INFO][3897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-h6zp7" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.505 [INFO][3931] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" HandleID="k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.506 [INFO][3931] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" HandleID="k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" 
Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d16b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", "pod":"calico-apiserver-bf9b9cc9-h6zp7", "timestamp":"2025-05-17 00:43:26.505323057 +0000 UTC"}, Hostname:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.506 [INFO][3931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.506 [INFO][3931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.506 [INFO][3931] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.517 [INFO][3931] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.524 [INFO][3931] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.531 [INFO][3931] ipam/ipam.go 511: Trying affinity for 192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.534 [INFO][3931] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 
00:43:26.621981 env[1335]: 2025-05-17 00:43:26.542 [INFO][3931] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.542 [INFO][3931] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.544 [INFO][3931] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733 May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.550 [INFO][3931] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.192/26 handle="k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.563 [INFO][3931] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.194/26] block=192.168.93.192/26 handle="k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.563 [INFO][3931] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.194/26] handle="k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.563 [INFO][3931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:43:26.621981 env[1335]: 2025-05-17 00:43:26.564 [INFO][3931] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.194/26] IPv6=[] ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" HandleID="k8s-pod-network.9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.623443 env[1335]: 2025-05-17 00:43:26.566 [INFO][3897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-h6zp7" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0", GenerateName:"calico-apiserver-bf9b9cc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a2ff53-9d59-4c60-9b54-47fa2348be26", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf9b9cc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"", 
Pod:"calico-apiserver-bf9b9cc9-h6zp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic04c894dbf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:26.623443 env[1335]: 2025-05-17 00:43:26.567 [INFO][3897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.194/32] ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-h6zp7" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.623443 env[1335]: 2025-05-17 00:43:26.567 [INFO][3897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic04c894dbf9 ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-h6zp7" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.623443 env[1335]: 2025-05-17 00:43:26.597 [INFO][3897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-h6zp7" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.623443 env[1335]: 2025-05-17 00:43:26.601 [INFO][3897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-h6zp7" 
WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0", GenerateName:"calico-apiserver-bf9b9cc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a2ff53-9d59-4c60-9b54-47fa2348be26", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf9b9cc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733", Pod:"calico-apiserver-bf9b9cc9-h6zp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic04c894dbf9", MAC:"ce:13:aa:79:c1:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:26.623443 env[1335]: 2025-05-17 00:43:26.616 [INFO][3897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733" 
Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-h6zp7" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:26.671000 audit[3969]: NETFILTER_CFG table=filter:107 family=2 entries=50 op=nft_register_chain pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:26.671000 audit[3969]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fffb2cabd40 a2=0 a3=7fffb2cabd2c items=0 ppid=3606 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:26.671000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:26.681786 env[1335]: time="2025-05-17T00:43:26.681583792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:26.682312 env[1335]: time="2025-05-17T00:43:26.682205867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:26.682750 env[1335]: time="2025-05-17T00:43:26.682243568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:26.683584 env[1335]: time="2025-05-17T00:43:26.683494417Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733 pid=3972 runtime=io.containerd.runc.v2 May 17 00:43:26.726111 systemd-networkd[1075]: cali592dc6f6dbf: Link UP May 17 00:43:26.737467 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali592dc6f6dbf: link becomes ready May 17 00:43:26.737960 systemd-networkd[1075]: cali592dc6f6dbf: Gained carrier May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.411 [INFO][3909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0 coredns-7c65d6cfc9- kube-system 2b0f4094-a401-446d-a3d3-fefd3d968c34 913 0 2025-05-17 00:42:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260 coredns-7c65d6cfc9-wllmd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali592dc6f6dbf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wllmd" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.412 [INFO][3909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wllmd" 
WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.555 [INFO][3940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" HandleID="k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.555 [INFO][3940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" HandleID="k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d1630), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", "pod":"coredns-7c65d6cfc9-wllmd", "timestamp":"2025-05-17 00:43:26.5553813 +0000 UTC"}, Hostname:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.555 [INFO][3940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.566 [INFO][3940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.566 [INFO][3940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.628 [INFO][3940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.637 [INFO][3940] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.647 [INFO][3940] ipam/ipam.go 511: Trying affinity for 192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.651 [INFO][3940] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.657 [INFO][3940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.658 [INFO][3940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.660 [INFO][3940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571 May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.679 [INFO][3940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.192/26 
handle="k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.705 [INFO][3940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.195/26] block=192.168.93.192/26 handle="k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.705 [INFO][3940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.195/26] handle="k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.706 [INFO][3940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:26.769919 env[1335]: 2025-05-17 00:43:26.706 [INFO][3940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.195/26] IPv6=[] ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" HandleID="k8s-pod-network.6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.772453 env[1335]: 2025-05-17 00:43:26.711 [INFO][3909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wllmd" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", 
SelfLink:"", UID:"2b0f4094-a401-446d-a3d3-fefd3d968c34", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"", Pod:"coredns-7c65d6cfc9-wllmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali592dc6f6dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:26.772453 env[1335]: 2025-05-17 00:43:26.711 [INFO][3909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.195/32] ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wllmd" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.772453 env[1335]: 2025-05-17 00:43:26.712 
[INFO][3909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali592dc6f6dbf ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wllmd" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.772453 env[1335]: 2025-05-17 00:43:26.743 [INFO][3909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wllmd" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.772453 env[1335]: 2025-05-17 00:43:26.747 [INFO][3909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wllmd" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b0f4094-a401-446d-a3d3-fefd3d968c34", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571", Pod:"coredns-7c65d6cfc9-wllmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali592dc6f6dbf", MAC:"d6:3b:c2:54:6c:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:26.772453 env[1335]: 2025-05-17 00:43:26.766 [INFO][3909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wllmd" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:26.810920 systemd-networkd[1075]: calif703cbe5bcf: Link UP May 17 00:43:26.819537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif703cbe5bcf: link becomes ready May 17 00:43:26.820634 systemd-networkd[1075]: calif703cbe5bcf: Gained carrier May 17 00:43:26.851587 env[1335]: time="2025-05-17T00:43:26.851280707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:26.851587 env[1335]: time="2025-05-17T00:43:26.851396934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:26.858578 env[1335]: time="2025-05-17T00:43:26.851503715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.477 [INFO][3908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0 calico-apiserver-bf9b9cc9- calico-apiserver 9ab41193-998e-4949-bcb0-dcdfdf7aa08f 911 0 2025-05-17 00:42:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bf9b9cc9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260 calico-apiserver-bf9b9cc9-xp6vq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif703cbe5bcf [] [] }} ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-xp6vq" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.477 [INFO][3908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-xp6vq" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 
00:43:26.861712 env[1335]: 2025-05-17 00:43:26.711 [INFO][3946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" HandleID="k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.712 [INFO][3946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" HandleID="k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", "pod":"calico-apiserver-bf9b9cc9-xp6vq", "timestamp":"2025-05-17 00:43:26.711758782 +0000 UTC"}, Hostname:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.712 [INFO][3946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.712 [INFO][3946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.712 [INFO][3946] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.726 [INFO][3946] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.758 [INFO][3946] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.768 [INFO][3946] ipam/ipam.go 511: Trying affinity for 192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.772 [INFO][3946] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.775 [INFO][3946] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.775 [INFO][3946] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.777 [INFO][3946] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.784 [INFO][3946] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.192/26 
handle="k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.803 [INFO][3946] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.196/26] block=192.168.93.192/26 handle="k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.803 [INFO][3946] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.196/26] handle="k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.803 [INFO][3946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:26.861712 env[1335]: 2025-05-17 00:43:26.803 [INFO][3946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.196/26] IPv6=[] ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" HandleID="k8s-pod-network.7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.863374 env[1335]: 2025-05-17 00:43:26.806 [INFO][3908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-xp6vq" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0", 
GenerateName:"calico-apiserver-bf9b9cc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ab41193-998e-4949-bcb0-dcdfdf7aa08f", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf9b9cc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"", Pod:"calico-apiserver-bf9b9cc9-xp6vq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif703cbe5bcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:26.863374 env[1335]: 2025-05-17 00:43:26.807 [INFO][3908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.196/32] ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-xp6vq" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.863374 env[1335]: 2025-05-17 00:43:26.807 [INFO][3908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif703cbe5bcf ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Namespace="calico-apiserver" 
Pod="calico-apiserver-bf9b9cc9-xp6vq" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.863374 env[1335]: 2025-05-17 00:43:26.827 [INFO][3908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-xp6vq" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.863374 env[1335]: 2025-05-17 00:43:26.830 [INFO][3908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-xp6vq" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0", GenerateName:"calico-apiserver-bf9b9cc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ab41193-998e-4949-bcb0-dcdfdf7aa08f", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf9b9cc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a", Pod:"calico-apiserver-bf9b9cc9-xp6vq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif703cbe5bcf", MAC:"56:a2:db:17:27:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:26.863374 env[1335]: 2025-05-17 00:43:26.848 [INFO][3908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a" Namespace="calico-apiserver" Pod="calico-apiserver-bf9b9cc9-xp6vq" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:26.864467 env[1335]: time="2025-05-17T00:43:26.861901462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571 pid=4006 runtime=io.containerd.runc.v2 May 17 00:43:26.910873 systemd-networkd[1075]: vxlan.calico: Gained IPv6LL May 17 00:43:26.921729 env[1335]: time="2025-05-17T00:43:26.921600475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:26.922074 env[1335]: time="2025-05-17T00:43:26.921996159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:26.922289 env[1335]: time="2025-05-17T00:43:26.922235559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:26.922810 env[1335]: time="2025-05-17T00:43:26.922739946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a pid=4046 runtime=io.containerd.runc.v2 May 17 00:43:26.935933 env[1335]: time="2025-05-17T00:43:26.935885313Z" level=info msg="StopPodSandbox for \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\"" May 17 00:43:26.945087 env[1335]: time="2025-05-17T00:43:26.943133091Z" level=info msg="StopPodSandbox for \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\"" May 17 00:43:27.013000 audit[4067]: NETFILTER_CFG table=filter:108 family=2 entries=46 op=nft_register_chain pid=4067 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:27.013000 audit[4067]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffca008e710 a2=0 a3=7ffca008e6fc items=0 ppid=3606 pid=4067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:27.013000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:27.057000 audit[4111]: NETFILTER_CFG table=filter:109 family=2 entries=45 op=nft_register_chain pid=4111 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:27.057000 audit[4111]: SYSCALL arch=c000003e syscall=46 success=yes exit=24264 a0=3 a1=7fff146897b0 a2=0 a3=7fff1468979c items=0 ppid=3606 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:27.057000 
audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:27.095460 env[1335]: time="2025-05-17T00:43:27.091467452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wllmd,Uid:2b0f4094-a401-446d-a3d3-fefd3d968c34,Namespace:kube-system,Attempt:1,} returns sandbox id \"6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571\"" May 17 00:43:27.110240 env[1335]: time="2025-05-17T00:43:27.106685492Z" level=info msg="CreateContainer within sandbox \"6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:43:27.133130 env[1335]: time="2025-05-17T00:43:27.133075476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf9b9cc9-h6zp7,Uid:05a2ff53-9d59-4c60-9b54-47fa2348be26,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733\"" May 17 00:43:27.137273 env[1335]: time="2025-05-17T00:43:27.137188794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:43:27.153946 env[1335]: time="2025-05-17T00:43:27.153884279Z" level=info msg="CreateContainer within sandbox \"6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a8250b48ab56ad97da688d25b1e8a10497127165a603d86f587684c7c324fc5\"" May 17 00:43:27.155514 env[1335]: time="2025-05-17T00:43:27.155470352Z" level=info msg="StartContainer for \"4a8250b48ab56ad97da688d25b1e8a10497127165a603d86f587684c7c324fc5\"" May 17 00:43:27.217449 env[1335]: time="2025-05-17T00:43:27.217375438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf9b9cc9-xp6vq,Uid:9ab41193-998e-4949-bcb0-dcdfdf7aa08f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id 
\"7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a\"" May 17 00:43:27.400497 env[1335]: time="2025-05-17T00:43:27.397822733Z" level=info msg="StartContainer for \"4a8250b48ab56ad97da688d25b1e8a10497127165a603d86f587684c7c324fc5\" returns successfully" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.250 [INFO][4091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.250 [INFO][4091] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" iface="eth0" netns="/var/run/netns/cni-7a7d7b8e-9d31-0e1f-fbdc-f98f9481b480" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.251 [INFO][4091] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" iface="eth0" netns="/var/run/netns/cni-7a7d7b8e-9d31-0e1f-fbdc-f98f9481b480" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.251 [INFO][4091] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" iface="eth0" netns="/var/run/netns/cni-7a7d7b8e-9d31-0e1f-fbdc-f98f9481b480" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.251 [INFO][4091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.251 [INFO][4091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.388 [INFO][4149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.389 [INFO][4149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.389 [INFO][4149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.411 [WARNING][4149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.412 [INFO][4149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.420 [INFO][4149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:27.429763 env[1335]: 2025-05-17 00:43:27.426 [INFO][4091] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:27.443035 systemd[1]: run-netns-cni\x2d7a7d7b8e\x2d9d31\x2d0e1f\x2dfbdc\x2df98f9481b480.mount: Deactivated successfully. 
May 17 00:43:27.445501 env[1335]: time="2025-05-17T00:43:27.445392542Z" level=info msg="TearDown network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\" successfully" May 17 00:43:27.445738 env[1335]: time="2025-05-17T00:43:27.445700710Z" level=info msg="StopPodSandbox for \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\" returns successfully" May 17 00:43:27.447110 env[1335]: time="2025-05-17T00:43:27.447068266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-4qrzt,Uid:67d969c1-4d93-44a9-a00c-87eca6fdadfb,Namespace:calico-system,Attempt:1,}" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.294 [INFO][4093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.295 [INFO][4093] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" iface="eth0" netns="/var/run/netns/cni-d66c047c-3be4-f1e3-efe7-9505c3337a75" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.311 [INFO][4093] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" iface="eth0" netns="/var/run/netns/cni-d66c047c-3be4-f1e3-efe7-9505c3337a75" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.312 [INFO][4093] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" iface="eth0" netns="/var/run/netns/cni-d66c047c-3be4-f1e3-efe7-9505c3337a75" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.312 [INFO][4093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.312 [INFO][4093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.492 [INFO][4170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.494 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.494 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.532 [WARNING][4170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.532 [INFO][4170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.541 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:27.546194 env[1335]: 2025-05-17 00:43:27.543 [INFO][4093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:27.547716 env[1335]: time="2025-05-17T00:43:27.547660444Z" level=info msg="TearDown network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\" successfully" May 17 00:43:27.547903 env[1335]: time="2025-05-17T00:43:27.547866927Z" level=info msg="StopPodSandbox for \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\" returns successfully" May 17 00:43:27.553368 env[1335]: time="2025-05-17T00:43:27.553320655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7cfb968-xnd89,Uid:877258a8-70e4-4a88-a629-d7c04d184c1d,Namespace:calico-system,Attempt:1,}" May 17 00:43:27.581000 audit[4202]: NETFILTER_CFG table=filter:110 family=2 entries=20 op=nft_register_rule pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:27.581000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0025d9c0 
a2=0 a3=7ffe0025d9ac items=0 ppid=2367 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:27.581000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:27.599000 audit[4202]: NETFILTER_CFG table=nat:111 family=2 entries=14 op=nft_register_rule pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:27.599000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe0025d9c0 a2=0 a3=0 items=0 ppid=2367 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:27.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:27.840505 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:43:27.850503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3ddd5bd048d: link becomes ready May 17 00:43:27.856574 systemd-networkd[1075]: cali3ddd5bd048d: Link UP May 17 00:43:27.859055 systemd-networkd[1075]: cali3ddd5bd048d: Gained carrier May 17 00:43:27.890132 kubelet[2220]: I0517 00:43:27.888974 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wllmd" podStartSLOduration=41.888942783 podStartE2EDuration="41.888942783s" podCreationTimestamp="2025-05-17 00:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:27.500832026 +0000 UTC m=+46.785161556" watchObservedRunningTime="2025-05-17 00:43:27.888942783 +0000 UTC m=+47.173272301" May 17 
00:43:27.897320 env[1335]: 2025-05-17 00:43:27.696 [INFO][4191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0 goldmane-8f77d7b6c- calico-system 67d969c1-4d93-44a9-a00c-87eca6fdadfb 936 0 2025-05-17 00:43:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260 goldmane-8f77d7b6c-4qrzt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3ddd5bd048d [] [] }} ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Namespace="calico-system" Pod="goldmane-8f77d7b6c-4qrzt" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.696 [INFO][4191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Namespace="calico-system" Pod="goldmane-8f77d7b6c-4qrzt" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.765 [INFO][4217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" HandleID="k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.765 [INFO][4217] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" HandleID="k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000241730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", "pod":"goldmane-8f77d7b6c-4qrzt", "timestamp":"2025-05-17 00:43:27.765249283 +0000 UTC"}, Hostname:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.765 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.765 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.766 [INFO][4217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.780 [INFO][4217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.786 [INFO][4217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.792 [INFO][4217] ipam/ipam.go 511: Trying affinity for 192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.795 [INFO][4217] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.798 [INFO][4217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.798 [INFO][4217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.800 [INFO][4217] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.807 [INFO][4217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.192/26 
handle="k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.818 [INFO][4217] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.197/26] block=192.168.93.192/26 handle="k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.819 [INFO][4217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.197/26] handle="k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.819 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:27.897320 env[1335]: 2025-05-17 00:43:27.820 [INFO][4217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.197/26] IPv6=[] ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" HandleID="k8s-pod-network.f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.898806 env[1335]: 2025-05-17 00:43:27.823 [INFO][4191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Namespace="calico-system" Pod="goldmane-8f77d7b6c-4qrzt" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", 
SelfLink:"", UID:"67d969c1-4d93-44a9-a00c-87eca6fdadfb", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"", Pod:"goldmane-8f77d7b6c-4qrzt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3ddd5bd048d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:27.898806 env[1335]: 2025-05-17 00:43:27.823 [INFO][4191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.197/32] ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Namespace="calico-system" Pod="goldmane-8f77d7b6c-4qrzt" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.898806 env[1335]: 2025-05-17 00:43:27.823 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ddd5bd048d ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Namespace="calico-system" Pod="goldmane-8f77d7b6c-4qrzt" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.898806 
env[1335]: 2025-05-17 00:43:27.867 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Namespace="calico-system" Pod="goldmane-8f77d7b6c-4qrzt" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.898806 env[1335]: 2025-05-17 00:43:27.868 [INFO][4191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Namespace="calico-system" Pod="goldmane-8f77d7b6c-4qrzt" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"67d969c1-4d93-44a9-a00c-87eca6fdadfb", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c", Pod:"goldmane-8f77d7b6c-4qrzt", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.93.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3ddd5bd048d", MAC:"6e:48:02:c0:ea:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:27.898806 env[1335]: 2025-05-17 00:43:27.892 [INFO][4191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c" Namespace="calico-system" Pod="goldmane-8f77d7b6c-4qrzt" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:27.933068 env[1335]: time="2025-05-17T00:43:27.932965324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:27.933647 env[1335]: time="2025-05-17T00:43:27.933570732Z" level=info msg="StopPodSandbox for \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\"" May 17 00:43:27.934552 env[1335]: time="2025-05-17T00:43:27.934490992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:27.934762 env[1335]: time="2025-05-17T00:43:27.934707986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:27.935565 env[1335]: time="2025-05-17T00:43:27.935490189Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c pid=4246 runtime=io.containerd.runc.v2 May 17 00:43:27.960537 systemd-networkd[1075]: cali7bec3346c39: Link UP May 17 00:43:27.961000 audit[4265]: NETFILTER_CFG table=filter:112 family=2 entries=56 op=nft_register_chain pid=4265 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:27.969504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7bec3346c39: link becomes ready May 17 00:43:27.961000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7ffeb871acb0 a2=0 a3=7ffeb871ac9c items=0 ppid=3606 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:27.961000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:27.971708 systemd-networkd[1075]: cali7bec3346c39: Gained carrier May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.717 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0 calico-kube-controllers-7f7cfb968- calico-system 877258a8-70e4-4a88-a629-d7c04d184c1d 937 0 2025-05-17 00:43:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f7cfb968 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] 
[]} {k8s ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260 calico-kube-controllers-7f7cfb968-xnd89 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7bec3346c39 [] [] }} ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Namespace="calico-system" Pod="calico-kube-controllers-7f7cfb968-xnd89" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.718 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Namespace="calico-system" Pod="calico-kube-controllers-7f7cfb968-xnd89" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.791 [INFO][4223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" HandleID="k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.793 [INFO][4223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" HandleID="k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", 
"pod":"calico-kube-controllers-7f7cfb968-xnd89", "timestamp":"2025-05-17 00:43:27.791367654 +0000 UTC"}, Hostname:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.793 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.819 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.819 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.881 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.901 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.910 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.913 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.918 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.918 [INFO][4223] ipam/ipam.go 1220: 
Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.922 [INFO][4223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386 May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.928 [INFO][4223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.192/26 handle="k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.944 [INFO][4223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.198/26] block=192.168.93.192/26 handle="k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.945 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.198/26] handle="k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.945 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:43:28.006085 env[1335]: 2025-05-17 00:43:27.945 [INFO][4223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.198/26] IPv6=[] ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" HandleID="k8s-pod-network.2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:28.007647 env[1335]: 2025-05-17 00:43:27.957 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Namespace="calico-system" Pod="calico-kube-controllers-7f7cfb968-xnd89" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0", GenerateName:"calico-kube-controllers-7f7cfb968-", Namespace:"calico-system", SelfLink:"", UID:"877258a8-70e4-4a88-a629-d7c04d184c1d", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7cfb968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", 
ContainerID:"", Pod:"calico-kube-controllers-7f7cfb968-xnd89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bec3346c39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:28.007647 env[1335]: 2025-05-17 00:43:27.957 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.198/32] ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Namespace="calico-system" Pod="calico-kube-controllers-7f7cfb968-xnd89" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:28.007647 env[1335]: 2025-05-17 00:43:27.957 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bec3346c39 ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Namespace="calico-system" Pod="calico-kube-controllers-7f7cfb968-xnd89" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:28.007647 env[1335]: 2025-05-17 00:43:27.981 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Namespace="calico-system" Pod="calico-kube-controllers-7f7cfb968-xnd89" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:28.007647 env[1335]: 2025-05-17 00:43:27.981 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" 
Namespace="calico-system" Pod="calico-kube-controllers-7f7cfb968-xnd89" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0", GenerateName:"calico-kube-controllers-7f7cfb968-", Namespace:"calico-system", SelfLink:"", UID:"877258a8-70e4-4a88-a629-d7c04d184c1d", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7cfb968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386", Pod:"calico-kube-controllers-7f7cfb968-xnd89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bec3346c39", MAC:"1a:68:fe:dc:b3:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:28.007647 env[1335]: 2025-05-17 00:43:28.001 [INFO][4203] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386" Namespace="calico-system" Pod="calico-kube-controllers-7f7cfb968-xnd89" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:28.078517 env[1335]: time="2025-05-17T00:43:28.076586335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:28.078517 env[1335]: time="2025-05-17T00:43:28.076635313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:28.078517 env[1335]: time="2025-05-17T00:43:28.076654102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:28.078517 env[1335]: time="2025-05-17T00:43:28.076848008Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386 pid=4299 runtime=io.containerd.runc.v2 May 17 00:43:28.125587 systemd-networkd[1075]: calic04c894dbf9: Gained IPv6LL May 17 00:43:28.203516 systemd[1]: run-netns-cni\x2dd66c047c\x2d3be4\x2df1e3\x2defe7\x2d9505c3337a75.mount: Deactivated successfully. May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.188 [INFO][4279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.188 [INFO][4279] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" iface="eth0" netns="/var/run/netns/cni-692840a8-23f8-4987-27d3-f18d893b687f" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.188 [INFO][4279] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" iface="eth0" netns="/var/run/netns/cni-692840a8-23f8-4987-27d3-f18d893b687f" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.189 [INFO][4279] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" iface="eth0" netns="/var/run/netns/cni-692840a8-23f8-4987-27d3-f18d893b687f" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.189 [INFO][4279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.189 [INFO][4279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.317 [INFO][4328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.318 [INFO][4328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.318 [INFO][4328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.333 [WARNING][4328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.333 [INFO][4328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.349 [INFO][4328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:28.354004 env[1335]: 2025-05-17 00:43:28.351 [INFO][4279] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:28.359664 env[1335]: time="2025-05-17T00:43:28.359576396Z" level=info msg="TearDown network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\" successfully" May 17 00:43:28.359861 env[1335]: time="2025-05-17T00:43:28.359828999Z" level=info msg="StopPodSandbox for \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\" returns successfully" May 17 00:43:28.361692 systemd[1]: run-netns-cni\x2d692840a8\x2d23f8\x2d4987\x2d27d3\x2df18d893b687f.mount: Deactivated successfully. 
May 17 00:43:28.363908 env[1335]: time="2025-05-17T00:43:28.363866490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-64ww9,Uid:603f05d6-04eb-4ce3-baf0-5f232fe52221,Namespace:calico-system,Attempt:1,}" May 17 00:43:28.509597 systemd-networkd[1075]: cali592dc6f6dbf: Gained IPv6LL May 17 00:43:28.657000 audit[4353]: NETFILTER_CFG table=filter:113 family=2 entries=52 op=nft_register_chain pid=4353 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:28.657000 audit[4353]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffcffbd5580 a2=0 a3=7ffcffbd556c items=0 ppid=3606 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:28.657000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:28.668000 audit[4356]: NETFILTER_CFG table=filter:114 family=2 entries=17 op=nft_register_rule pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:28.668000 audit[4356]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf7cbdf70 a2=0 a3=7ffcf7cbdf5c items=0 ppid=2367 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:28.668000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:28.681000 audit[4356]: NETFILTER_CFG table=nat:115 family=2 entries=35 op=nft_register_chain pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:28.681000 audit[4356]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 
a0=3 a1=7ffcf7cbdf70 a2=0 a3=7ffcf7cbdf5c items=0 ppid=2367 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:28.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:28.817507 env[1335]: time="2025-05-17T00:43:28.817232199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-4qrzt,Uid:67d969c1-4d93-44a9-a00c-87eca6fdadfb,Namespace:calico-system,Attempt:1,} returns sandbox id \"f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c\"" May 17 00:43:28.829930 systemd-networkd[1075]: calif703cbe5bcf: Gained IPv6LL May 17 00:43:28.898811 env[1335]: time="2025-05-17T00:43:28.898737783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f7cfb968-xnd89,Uid:877258a8-70e4-4a88-a629-d7c04d184c1d,Namespace:calico-system,Attempt:1,} returns sandbox id \"2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386\"" May 17 00:43:28.921566 systemd-networkd[1075]: cali01b7f015dc9: Link UP May 17 00:43:28.937529 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:43:28.946694 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali01b7f015dc9: link becomes ready May 17 00:43:28.950620 systemd-networkd[1075]: cali01b7f015dc9: Gained carrier May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.629 [INFO][4334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0 csi-node-driver- calico-system 603f05d6-04eb-4ce3-baf0-5f232fe52221 951 0 2025-05-17 00:43:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260 csi-node-driver-64ww9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali01b7f015dc9 [] [] }} ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Namespace="calico-system" Pod="csi-node-driver-64ww9" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.630 [INFO][4334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Namespace="calico-system" Pod="csi-node-driver-64ww9" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.810 [INFO][4355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" HandleID="k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.821 [INFO][4355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" HandleID="k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000320500), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", 
"pod":"csi-node-driver-64ww9", "timestamp":"2025-05-17 00:43:28.809389781 +0000 UTC"}, Hostname:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.822 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.823 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.823 [INFO][4355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.845 [INFO][4355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.856 [INFO][4355] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.866 [INFO][4355] ipam/ipam.go 511: Trying affinity for 192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.869 [INFO][4355] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.883 [INFO][4355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.883 [INFO][4355] ipam/ipam.go 1220: Attempting to assign 1 
addresses from block block=192.168.93.192/26 handle="k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.886 [INFO][4355] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282 May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.898 [INFO][4355] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.192/26 handle="k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.914 [INFO][4355] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.199/26] block=192.168.93.192/26 handle="k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.914 [INFO][4355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.199/26] handle="k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.914 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:43:28.989462 env[1335]: 2025-05-17 00:43:28.914 [INFO][4355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.199/26] IPv6=[] ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" HandleID="k8s-pod-network.db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.990825 env[1335]: 2025-05-17 00:43:28.917 [INFO][4334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Namespace="calico-system" Pod="csi-node-driver-64ww9" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"603f05d6-04eb-4ce3-baf0-5f232fe52221", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"", Pod:"csi-node-driver-64ww9", 
Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01b7f015dc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:28.990825 env[1335]: 2025-05-17 00:43:28.917 [INFO][4334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.199/32] ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Namespace="calico-system" Pod="csi-node-driver-64ww9" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.990825 env[1335]: 2025-05-17 00:43:28.917 [INFO][4334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01b7f015dc9 ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Namespace="calico-system" Pod="csi-node-driver-64ww9" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.990825 env[1335]: 2025-05-17 00:43:28.958 [INFO][4334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Namespace="calico-system" Pod="csi-node-driver-64ww9" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:28.990825 env[1335]: 2025-05-17 00:43:28.959 [INFO][4334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Namespace="calico-system" Pod="csi-node-driver-64ww9" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"603f05d6-04eb-4ce3-baf0-5f232fe52221", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282", Pod:"csi-node-driver-64ww9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01b7f015dc9", MAC:"96:91:20:ed:d4:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:28.990825 env[1335]: 2025-05-17 00:43:28.985 [INFO][4334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282" Namespace="calico-system" Pod="csi-node-driver-64ww9" 
WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:29.054717 env[1335]: time="2025-05-17T00:43:29.054617888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:29.055026 env[1335]: time="2025-05-17T00:43:29.054977567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:29.055271 env[1335]: time="2025-05-17T00:43:29.055231994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:29.055774 env[1335]: time="2025-05-17T00:43:29.055722667Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282 pid=4391 runtime=io.containerd.runc.v2 May 17 00:43:29.116491 kernel: kauditd_printk_skb: 554 callbacks suppressed May 17 00:43:29.116706 kernel: audit: type=1325 audit(1747442609.104:399): table=filter:116 family=2 entries=56 op=nft_register_chain pid=4405 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:29.104000 audit[4405]: NETFILTER_CFG table=filter:116 family=2 entries=56 op=nft_register_chain pid=4405 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:29.104000 audit[4405]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7fff4a6689d0 a2=0 a3=7fff4a6689bc items=0 ppid=3606 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:29.104000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:29.186043 kernel: audit: type=1300 audit(1747442609.104:399): arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7fff4a6689d0 a2=0 a3=7fff4a6689bc items=0 ppid=3606 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:29.186175 kernel: audit: type=1327 audit(1747442609.104:399): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:29.251529 env[1335]: time="2025-05-17T00:43:29.251470915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-64ww9,Uid:603f05d6-04eb-4ce3-baf0-5f232fe52221,Namespace:calico-system,Attempt:1,} returns sandbox id \"db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282\"" May 17 00:43:29.598199 systemd-networkd[1075]: cali7bec3346c39: Gained IPv6LL May 17 00:43:29.790479 systemd-networkd[1075]: cali3ddd5bd048d: Gained IPv6LL May 17 00:43:29.936097 env[1335]: time="2025-05-17T00:43:29.935525296Z" level=info msg="StopPodSandbox for \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\"" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.035 [INFO][4440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.035 [INFO][4440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" iface="eth0" netns="/var/run/netns/cni-df5080dd-962f-65ed-42f9-228d30e4a838" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.036 [INFO][4440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" iface="eth0" netns="/var/run/netns/cni-df5080dd-962f-65ed-42f9-228d30e4a838" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.036 [INFO][4440] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" iface="eth0" netns="/var/run/netns/cni-df5080dd-962f-65ed-42f9-228d30e4a838" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.036 [INFO][4440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.036 [INFO][4440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.103 [INFO][4447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.104 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.104 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.116 [WARNING][4447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.116 [INFO][4447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.118 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:30.122605 env[1335]: 2025-05-17 00:43:30.120 [INFO][4440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:30.122605 env[1335]: time="2025-05-17T00:43:30.122723640Z" level=info msg="TearDown network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\" successfully" May 17 00:43:30.122605 env[1335]: time="2025-05-17T00:43:30.122781719Z" level=info msg="StopPodSandbox for \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\" returns successfully" May 17 00:43:30.122605 env[1335]: time="2025-05-17T00:43:30.123692468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-szlbf,Uid:13d7a51f-4298-4a96-98ab-641457e5522e,Namespace:kube-system,Attempt:1,}" May 17 00:43:30.133683 systemd[1]: run-netns-cni\x2ddf5080dd\x2d962f\x2d65ed\x2d42f9\x2d228d30e4a838.mount: Deactivated successfully. 
May 17 00:43:30.402476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:43:30.404625 systemd-networkd[1075]: calif8ccc164a4b: Link UP May 17 00:43:30.410511 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif8ccc164a4b: link becomes ready May 17 00:43:30.413052 systemd-networkd[1075]: calif8ccc164a4b: Gained carrier May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.231 [INFO][4453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0 coredns-7c65d6cfc9- kube-system 13d7a51f-4298-4a96-98ab-641457e5522e 971 0 2025-05-17 00:42:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260 coredns-7c65d6cfc9-szlbf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif8ccc164a4b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Namespace="kube-system" Pod="coredns-7c65d6cfc9-szlbf" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.231 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Namespace="kube-system" Pod="coredns-7c65d6cfc9-szlbf" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.307 [INFO][4465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" 
HandleID="k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.308 [INFO][4465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" HandleID="k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325560), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", "pod":"coredns-7c65d6cfc9-szlbf", "timestamp":"2025-05-17 00:43:30.307028628 +0000 UTC"}, Hostname:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.308 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.308 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.308 [INFO][4465] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260' May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.324 [INFO][4465] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.337 [INFO][4465] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.345 [INFO][4465] ipam/ipam.go 511: Trying affinity for 192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.349 [INFO][4465] ipam/ipam.go 158: Attempting to load block cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.354 [INFO][4465] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.93.192/26 host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.354 [INFO][4465] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.93.192/26 handle="k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.356 [INFO][4465] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619 May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.365 [INFO][4465] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.93.192/26 
handle="k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.385 [INFO][4465] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.93.200/26] block=192.168.93.192/26 handle="k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.387 [INFO][4465] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.93.200/26] handle="k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" host="ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260" May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.387 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:30.448367 env[1335]: 2025-05-17 00:43:30.387 [INFO][4465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.93.200/26] IPv6=[] ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" HandleID="k8s-pod-network.44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.450044 env[1335]: 2025-05-17 00:43:30.391 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Namespace="kube-system" Pod="coredns-7c65d6cfc9-szlbf" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", 
SelfLink:"", UID:"13d7a51f-4298-4a96-98ab-641457e5522e", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"", Pod:"coredns-7c65d6cfc9-szlbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8ccc164a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:30.450044 env[1335]: 2025-05-17 00:43:30.391 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.93.200/32] ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Namespace="kube-system" Pod="coredns-7c65d6cfc9-szlbf" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.450044 env[1335]: 2025-05-17 00:43:30.391 
[INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8ccc164a4b ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Namespace="kube-system" Pod="coredns-7c65d6cfc9-szlbf" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.450044 env[1335]: 2025-05-17 00:43:30.413 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Namespace="kube-system" Pod="coredns-7c65d6cfc9-szlbf" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.450044 env[1335]: 2025-05-17 00:43:30.415 [INFO][4453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Namespace="kube-system" Pod="coredns-7c65d6cfc9-szlbf" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"13d7a51f-4298-4a96-98ab-641457e5522e", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619", Pod:"coredns-7c65d6cfc9-szlbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8ccc164a4b", MAC:"b6:53:02:5c:54:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:30.450044 env[1335]: 2025-05-17 00:43:30.444 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619" Namespace="kube-system" Pod="coredns-7c65d6cfc9-szlbf" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:30.519017 env[1335]: time="2025-05-17T00:43:30.518927271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:43:30.519331 env[1335]: time="2025-05-17T00:43:30.519270100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:43:30.519566 env[1335]: time="2025-05-17T00:43:30.519524619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:43:30.520042 env[1335]: time="2025-05-17T00:43:30.519992136Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619 pid=4487 runtime=io.containerd.runc.v2 May 17 00:43:30.558000 audit[4500]: NETFILTER_CFG table=filter:117 family=2 entries=62 op=nft_register_chain pid=4500 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:30.585568 kernel: audit: type=1325 audit(1747442610.558:400): table=filter:117 family=2 entries=62 op=nft_register_chain pid=4500 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:43:30.599238 systemd[1]: run-containerd-runc-k8s.io-44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619-runc.gCv0cw.mount: Deactivated successfully. 
May 17 00:43:30.636926 kernel: audit: type=1300 audit(1747442610.558:400): arch=c000003e syscall=46 success=yes exit=27948 a0=3 a1=7fffb60d3c10 a2=0 a3=7fffb60d3bfc items=0 ppid=3606 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:30.558000 audit[4500]: SYSCALL arch=c000003e syscall=46 success=yes exit=27948 a0=3 a1=7fffb60d3c10 a2=0 a3=7fffb60d3bfc items=0 ppid=3606 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:30.558000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:30.676582 kernel: audit: type=1327 audit(1747442610.558:400): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:43:30.772676 env[1335]: time="2025-05-17T00:43:30.772620737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-szlbf,Uid:13d7a51f-4298-4a96-98ab-641457e5522e,Namespace:kube-system,Attempt:1,} returns sandbox id \"44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619\"" May 17 00:43:30.781034 env[1335]: time="2025-05-17T00:43:30.780975649Z" level=info msg="CreateContainer within sandbox \"44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:43:30.816556 systemd-networkd[1075]: cali01b7f015dc9: Gained IPv6LL May 17 00:43:30.828949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000762224.mount: Deactivated successfully. 
May 17 00:43:30.849010 env[1335]: time="2025-05-17T00:43:30.848941602Z" level=info msg="CreateContainer within sandbox \"44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2d8bc44e24ddc5a041804928510f91fa812a1fc1b17006fd71446c6560edeec\"" May 17 00:43:30.850179 env[1335]: time="2025-05-17T00:43:30.850133646Z" level=info msg="StartContainer for \"c2d8bc44e24ddc5a041804928510f91fa812a1fc1b17006fd71446c6560edeec\"" May 17 00:43:31.057292 env[1335]: time="2025-05-17T00:43:31.055611126Z" level=info msg="StartContainer for \"c2d8bc44e24ddc5a041804928510f91fa812a1fc1b17006fd71446c6560edeec\" returns successfully" May 17 00:43:31.503207 kubelet[2220]: I0517 00:43:31.502731 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-szlbf" podStartSLOduration=45.502704755 podStartE2EDuration="45.502704755s" podCreationTimestamp="2025-05-17 00:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:43:31.501645348 +0000 UTC m=+50.785975099" watchObservedRunningTime="2025-05-17 00:43:31.502704755 +0000 UTC m=+50.787034272" May 17 00:43:31.572000 audit[4563]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:31.591476 kernel: audit: type=1325 audit(1747442611.572:401): table=filter:118 family=2 entries=14 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:31.572000 audit[4563]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde7095720 a2=0 a3=7ffde709570c items=0 ppid=2367 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) May 17 00:43:31.626527 kernel: audit: type=1300 audit(1747442611.572:401): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde7095720 a2=0 a3=7ffde709570c items=0 ppid=2367 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:31.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:31.645669 kernel: audit: type=1327 audit(1747442611.572:401): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:31.647945 systemd-networkd[1075]: calif8ccc164a4b: Gained IPv6LL May 17 00:43:31.666464 kernel: audit: type=1325 audit(1747442611.626:402): table=nat:119 family=2 entries=44 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:31.626000 audit[4563]: NETFILTER_CFG table=nat:119 family=2 entries=44 op=nft_register_rule pid=4563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:31.626000 audit[4563]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffde7095720 a2=0 a3=7ffde709570c items=0 ppid=2367 pid=4563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:31.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:31.681000 audit[4565]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:31.681000 audit[4565]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1760d060 
a2=0 a3=7ffc1760d04c items=0 ppid=2367 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:31.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:31.701000 audit[4565]: NETFILTER_CFG table=nat:121 family=2 entries=56 op=nft_register_chain pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:31.701000 audit[4565]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc1760d060 a2=0 a3=7ffc1760d04c items=0 ppid=2367 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:31.701000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:31.820608 env[1335]: time="2025-05-17T00:43:31.820519653Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:31.823303 env[1335]: time="2025-05-17T00:43:31.823251837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:31.825812 env[1335]: time="2025-05-17T00:43:31.825762238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:31.828533 env[1335]: time="2025-05-17T00:43:31.828484281Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:31.829371 env[1335]: time="2025-05-17T00:43:31.829311804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:43:31.831894 env[1335]: time="2025-05-17T00:43:31.831236696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:43:31.834043 env[1335]: time="2025-05-17T00:43:31.833942818Z" level=info msg="CreateContainer within sandbox \"9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:43:31.858629 env[1335]: time="2025-05-17T00:43:31.858551278Z" level=info msg="CreateContainer within sandbox \"9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9affe10db86bac2793300c358c7e439f86796f2cfd276173652617589430fadb\"" May 17 00:43:31.861871 env[1335]: time="2025-05-17T00:43:31.861804072Z" level=info msg="StartContainer for \"9affe10db86bac2793300c358c7e439f86796f2cfd276173652617589430fadb\"" May 17 00:43:31.984111 env[1335]: time="2025-05-17T00:43:31.983818725Z" level=info msg="StartContainer for \"9affe10db86bac2793300c358c7e439f86796f2cfd276173652617589430fadb\" returns successfully" May 17 00:43:32.041739 env[1335]: time="2025-05-17T00:43:32.041680386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:32.046558 env[1335]: time="2025-05-17T00:43:32.046506934Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:32.050307 env[1335]: time="2025-05-17T00:43:32.050255362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:32.054393 env[1335]: time="2025-05-17T00:43:32.054344904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:32.057045 env[1335]: time="2025-05-17T00:43:32.055828744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:43:32.060359 env[1335]: time="2025-05-17T00:43:32.060307767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:43:32.063831 env[1335]: time="2025-05-17T00:43:32.063782320Z" level=info msg="CreateContainer within sandbox \"7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:43:32.087102 env[1335]: time="2025-05-17T00:43:32.086036505Z" level=info msg="CreateContainer within sandbox \"7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf7a8adfdd67cfee927f1276eb3e0e55d301e6091eb76bcaedd4d8ece93a2655\"" May 17 00:43:32.087639 env[1335]: time="2025-05-17T00:43:32.087592735Z" level=info msg="StartContainer for \"bf7a8adfdd67cfee927f1276eb3e0e55d301e6091eb76bcaedd4d8ece93a2655\"" May 17 00:43:32.182115 env[1335]: time="2025-05-17T00:43:32.182036617Z" level=info msg="trying next host" 
error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:32.184128 env[1335]: time="2025-05-17T00:43:32.184048006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:32.185758 kubelet[2220]: E0517 00:43:32.184681 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:43:32.185758 kubelet[2220]: E0517 00:43:32.184773 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:43:32.185758 kubelet[2220]: E0517 00:43:32.185195 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zx976,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-4qrzt_calico-system(67d969c1-4d93-44a9-a00c-87eca6fdadfb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:32.186570 kubelet[2220]: E0517 00:43:32.186479 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb" May 17 00:43:32.187000 env[1335]: time="2025-05-17T00:43:32.186956030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:43:32.237801 env[1335]: time="2025-05-17T00:43:32.237737820Z" level=info msg="StartContainer for 
\"bf7a8adfdd67cfee927f1276eb3e0e55d301e6091eb76bcaedd4d8ece93a2655\" returns successfully" May 17 00:43:32.493380 kubelet[2220]: E0517 00:43:32.493230 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb" May 17 00:43:32.514831 kubelet[2220]: I0517 00:43:32.514742 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bf9b9cc9-h6zp7" podStartSLOduration=31.819700875 podStartE2EDuration="36.514705104s" podCreationTimestamp="2025-05-17 00:42:56 +0000 UTC" firstStartedPulling="2025-05-17 00:43:27.135978924 +0000 UTC m=+46.420308421" lastFinishedPulling="2025-05-17 00:43:31.830983143 +0000 UTC m=+51.115312650" observedRunningTime="2025-05-17 00:43:32.510525004 +0000 UTC m=+51.794854523" watchObservedRunningTime="2025-05-17 00:43:32.514705104 +0000 UTC m=+51.799034621" May 17 00:43:32.607000 audit[4648]: NETFILTER_CFG table=filter:122 family=2 entries=14 op=nft_register_rule pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:32.607000 audit[4648]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee276dd30 a2=0 a3=7ffee276dd1c items=0 ppid=2367 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:32.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:32.616000 audit[4648]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:32.616000 audit[4648]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=5772 a0=3 a1=7ffee276dd30 a2=0 a3=7ffee276dd1c items=0 ppid=2367 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:32.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:33.495646 kubelet[2220]: I0517 00:43:33.493992 2220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:43:33.663000 audit[4651]: NETFILTER_CFG table=filter:124 family=2 entries=14 op=nft_register_rule pid=4651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:33.663000 audit[4651]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffaa10ed00 a2=0 a3=7fffaa10ecec items=0 ppid=2367 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:33.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:33.675000 audit[4651]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=4651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:33.675000 audit[4651]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffaa10ed00 a2=0 a3=7fffaa10ecec items=0 ppid=2367 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:33.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:34.496865 
kubelet[2220]: I0517 00:43:34.496820 2220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:43:34.981744 kubelet[2220]: I0517 00:43:34.981648 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bf9b9cc9-xp6vq" podStartSLOduration=34.145905624 podStartE2EDuration="38.981616176s" podCreationTimestamp="2025-05-17 00:42:56 +0000 UTC" firstStartedPulling="2025-05-17 00:43:27.222968201 +0000 UTC m=+46.507297699" lastFinishedPulling="2025-05-17 00:43:32.058678752 +0000 UTC m=+51.343008251" observedRunningTime="2025-05-17 00:43:32.55415467 +0000 UTC m=+51.838484190" watchObservedRunningTime="2025-05-17 00:43:34.981616176 +0000 UTC m=+54.265945707" May 17 00:43:35.083000 audit[4653]: NETFILTER_CFG table=filter:126 family=2 entries=13 op=nft_register_rule pid=4653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:35.090449 kernel: kauditd_printk_skb: 20 callbacks suppressed May 17 00:43:35.090602 kernel: audit: type=1325 audit(1747442615.083:409): table=filter:126 family=2 entries=13 op=nft_register_rule pid=4653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:35.083000 audit[4653]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe7ee5bb30 a2=0 a3=7ffe7ee5bb1c items=0 ppid=2367 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:35.156477 kernel: audit: type=1300 audit(1747442615.083:409): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe7ee5bb30 a2=0 a3=7ffe7ee5bb1c items=0 ppid=2367 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:35.083000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:35.226456 kernel: audit: type=1327 audit(1747442615.083:409): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:35.109000 audit[4653]: NETFILTER_CFG table=nat:127 family=2 entries=27 op=nft_register_chain pid=4653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:35.264549 kernel: audit: type=1325 audit(1747442615.109:410): table=nat:127 family=2 entries=27 op=nft_register_chain pid=4653 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:35.109000 audit[4653]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe7ee5bb30 a2=0 a3=7ffe7ee5bb1c items=0 ppid=2367 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:35.300604 kernel: audit: type=1300 audit(1747442615.109:410): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe7ee5bb30 a2=0 a3=7ffe7ee5bb1c items=0 ppid=2367 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:35.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:35.350462 kernel: audit: type=1327 audit(1747442615.109:410): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:36.697057 env[1335]: time="2025-05-17T00:43:36.697006004Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:43:36.699661 env[1335]: time="2025-05-17T00:43:36.699615107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:36.702095 env[1335]: time="2025-05-17T00:43:36.702048268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:36.705080 env[1335]: time="2025-05-17T00:43:36.705026798Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:36.707123 env[1335]: time="2025-05-17T00:43:36.707055675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:43:36.712833 env[1335]: time="2025-05-17T00:43:36.712786784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:43:36.753706 env[1335]: time="2025-05-17T00:43:36.752591167Z" level=info msg="CreateContainer within sandbox \"2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:43:36.794905 env[1335]: time="2025-05-17T00:43:36.794838368Z" level=info msg="CreateContainer within sandbox \"2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"685c56b04208caa766b15202e9c4cafd9bae5134acf83397a2dad7ae810d9330\"" May 17 00:43:36.797591 env[1335]: time="2025-05-17T00:43:36.797534005Z" level=info msg="StartContainer for 
\"685c56b04208caa766b15202e9c4cafd9bae5134acf83397a2dad7ae810d9330\"" May 17 00:43:36.802606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848275737.mount: Deactivated successfully. May 17 00:43:36.979834 env[1335]: time="2025-05-17T00:43:36.979655069Z" level=info msg="StartContainer for \"685c56b04208caa766b15202e9c4cafd9bae5134acf83397a2dad7ae810d9330\" returns successfully" May 17 00:43:37.714539 kubelet[2220]: I0517 00:43:37.713105 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f7cfb968-xnd89" podStartSLOduration=27.904797411 podStartE2EDuration="35.713076453s" podCreationTimestamp="2025-05-17 00:43:02 +0000 UTC" firstStartedPulling="2025-05-17 00:43:28.900543966 +0000 UTC m=+48.184873471" lastFinishedPulling="2025-05-17 00:43:36.708823003 +0000 UTC m=+55.993152513" observedRunningTime="2025-05-17 00:43:37.534214105 +0000 UTC m=+56.818543623" watchObservedRunningTime="2025-05-17 00:43:37.713076453 +0000 UTC m=+56.997405971" May 17 00:43:37.994032 env[1335]: time="2025-05-17T00:43:37.993864791Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:37.997128 env[1335]: time="2025-05-17T00:43:37.997066464Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:37.999952 env[1335]: time="2025-05-17T00:43:37.999871101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:38.002323 env[1335]: time="2025-05-17T00:43:38.002269363Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:38.003186 env[1335]: time="2025-05-17T00:43:38.003127831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:43:38.006772 env[1335]: time="2025-05-17T00:43:38.006727142Z" level=info msg="CreateContainer within sandbox \"db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:43:38.040818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953075032.mount: Deactivated successfully. May 17 00:43:38.045684 env[1335]: time="2025-05-17T00:43:38.045589128Z" level=info msg="CreateContainer within sandbox \"db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5d5a82ec4f24492e551e42fed6fd4c19b3defa5c6a9b887a68907263a1901bb6\"" May 17 00:43:38.048344 env[1335]: time="2025-05-17T00:43:38.048251520Z" level=info msg="StartContainer for \"5d5a82ec4f24492e551e42fed6fd4c19b3defa5c6a9b887a68907263a1901bb6\"" May 17 00:43:38.170580 env[1335]: time="2025-05-17T00:43:38.170368665Z" level=info msg="StartContainer for \"5d5a82ec4f24492e551e42fed6fd4c19b3defa5c6a9b887a68907263a1901bb6\" returns successfully" May 17 00:43:38.175660 env[1335]: time="2025-05-17T00:43:38.175606879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:43:38.727876 systemd[1]: run-containerd-runc-k8s.io-5d5a82ec4f24492e551e42fed6fd4c19b3defa5c6a9b887a68907263a1901bb6-runc.dYujK1.mount: Deactivated successfully. 
May 17 00:43:39.548194 env[1335]: time="2025-05-17T00:43:39.548114693Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:39.554464 env[1335]: time="2025-05-17T00:43:39.554011151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:39.564195 env[1335]: time="2025-05-17T00:43:39.563774175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:39.569407 env[1335]: time="2025-05-17T00:43:39.568167661Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:43:39.569407 env[1335]: time="2025-05-17T00:43:39.569187021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:43:39.578697 env[1335]: time="2025-05-17T00:43:39.578636240Z" level=info msg="CreateContainer within sandbox \"db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:43:39.613829 env[1335]: time="2025-05-17T00:43:39.613768418Z" level=info msg="CreateContainer within sandbox \"db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"91c346dc9e0e8aa7d56d41df505c838f57ec6cc3df452a0678a34138a97aa5b6\"" May 17 
00:43:39.615179 env[1335]: time="2025-05-17T00:43:39.615138105Z" level=info msg="StartContainer for \"91c346dc9e0e8aa7d56d41df505c838f57ec6cc3df452a0678a34138a97aa5b6\"" May 17 00:43:39.727328 systemd[1]: run-containerd-runc-k8s.io-91c346dc9e0e8aa7d56d41df505c838f57ec6cc3df452a0678a34138a97aa5b6-runc.MVVnyL.mount: Deactivated successfully. May 17 00:43:39.811118 env[1335]: time="2025-05-17T00:43:39.811064967Z" level=info msg="StartContainer for \"91c346dc9e0e8aa7d56d41df505c838f57ec6cc3df452a0678a34138a97aa5b6\" returns successfully" May 17 00:43:40.224379 kubelet[2220]: I0517 00:43:40.224260 2220 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:43:40.225188 kubelet[2220]: I0517 00:43:40.225144 2220 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:43:40.922484 env[1335]: time="2025-05-17T00:43:40.922075343Z" level=info msg="StopPodSandbox for \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\"" May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:40.991 [WARNING][4806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"13d7a51f-4298-4a96-98ab-641457e5522e", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619", Pod:"coredns-7c65d6cfc9-szlbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8ccc164a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:40.992 [INFO][4806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:40.992 [INFO][4806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" iface="eth0" netns="" May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:40.992 [INFO][4806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:40.992 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:41.034 [INFO][4814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:41.034 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:41.035 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:41.044 [WARNING][4814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:41.044 [INFO][4814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:41.046 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.050290 env[1335]: 2025-05-17 00:43:41.048 [INFO][4806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:41.052857 env[1335]: time="2025-05-17T00:43:41.050336346Z" level=info msg="TearDown network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\" successfully" May 17 00:43:41.052857 env[1335]: time="2025-05-17T00:43:41.050378873Z" level=info msg="StopPodSandbox for \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\" returns successfully" May 17 00:43:41.052857 env[1335]: time="2025-05-17T00:43:41.051131821Z" level=info msg="RemovePodSandbox for \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\"" May 17 00:43:41.052857 env[1335]: time="2025-05-17T00:43:41.051232948Z" level=info msg="Forcibly stopping sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\"" May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.104 [WARNING][4830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"13d7a51f-4298-4a96-98ab-641457e5522e", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"44d8d417101a329964f9437ada640e2058af9466ac5980ea4c35200dc76cf619", Pod:"coredns-7c65d6cfc9-szlbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8ccc164a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.104 [INFO][4830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.105 [INFO][4830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" iface="eth0" netns="" May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.105 [INFO][4830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.105 [INFO][4830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.144 [INFO][4838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.144 [INFO][4838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.145 [INFO][4838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.157 [WARNING][4838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.158 [INFO][4838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" HandleID="k8s-pod-network.94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--szlbf-eth0" May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.165 [INFO][4838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.170810 env[1335]: 2025-05-17 00:43:41.168 [INFO][4830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783" May 17 00:43:41.171956 env[1335]: time="2025-05-17T00:43:41.170860358Z" level=info msg="TearDown network for sandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\" successfully" May 17 00:43:41.177746 env[1335]: time="2025-05-17T00:43:41.177602017Z" level=info msg="RemovePodSandbox \"94bd9b8302f4690ca113becda2d629de95a1b083156febbc5bfc5fd98e123783\" returns successfully" May 17 00:43:41.179649 env[1335]: time="2025-05-17T00:43:41.179609666Z" level=info msg="StopPodSandbox for \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\"" May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.244 [WARNING][4854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"67d969c1-4d93-44a9-a00c-87eca6fdadfb", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c", Pod:"goldmane-8f77d7b6c-4qrzt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3ddd5bd048d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.244 [INFO][4854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.244 [INFO][4854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" iface="eth0" netns="" May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.244 [INFO][4854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.244 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.281 [INFO][4861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.282 [INFO][4861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.282 [INFO][4861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.292 [WARNING][4861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.292 [INFO][4861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.294 [INFO][4861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.298496 env[1335]: 2025-05-17 00:43:41.296 [INFO][4854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:41.299551 env[1335]: time="2025-05-17T00:43:41.298552434Z" level=info msg="TearDown network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\" successfully" May 17 00:43:41.299551 env[1335]: time="2025-05-17T00:43:41.298599169Z" level=info msg="StopPodSandbox for \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\" returns successfully" May 17 00:43:41.300252 env[1335]: time="2025-05-17T00:43:41.300215264Z" level=info msg="RemovePodSandbox for \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\"" May 17 00:43:41.300536 env[1335]: time="2025-05-17T00:43:41.300416349Z" level=info msg="Forcibly stopping sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\"" May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.351 [WARNING][4876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"67d969c1-4d93-44a9-a00c-87eca6fdadfb", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"f0e11e85a6c947b7add0b70d4a0e810f732916dfc6127e0a0d2516f668351e9c", Pod:"goldmane-8f77d7b6c-4qrzt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.93.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3ddd5bd048d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.351 [INFO][4876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.351 [INFO][4876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" iface="eth0" netns="" May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.351 [INFO][4876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.351 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.382 [INFO][4883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.382 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.383 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.391 [WARNING][4883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.391 [INFO][4883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" HandleID="k8s-pod-network.a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-goldmane--8f77d7b6c--4qrzt-eth0" May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.393 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.397533 env[1335]: 2025-05-17 00:43:41.395 [INFO][4876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a" May 17 00:43:41.398695 env[1335]: time="2025-05-17T00:43:41.397813584Z" level=info msg="TearDown network for sandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\" successfully" May 17 00:43:41.405306 env[1335]: time="2025-05-17T00:43:41.405240333Z" level=info msg="RemovePodSandbox \"a3e2478d372cf7a76e6bb768df19b0c5414d95ae0b5174e46bb2420140fdd31a\" returns successfully" May 17 00:43:41.405973 env[1335]: time="2025-05-17T00:43:41.405928033Z" level=info msg="StopPodSandbox for \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\"" May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.499 [WARNING][4897] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0", GenerateName:"calico-apiserver-bf9b9cc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a2ff53-9d59-4c60-9b54-47fa2348be26", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf9b9cc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733", Pod:"calico-apiserver-bf9b9cc9-h6zp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic04c894dbf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.500 [INFO][4897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.500 [INFO][4897] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" iface="eth0" netns="" May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.500 [INFO][4897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.500 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.539 [INFO][4907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.540 [INFO][4907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.540 [INFO][4907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.549 [WARNING][4907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.550 [INFO][4907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.552 [INFO][4907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.558359 env[1335]: 2025-05-17 00:43:41.554 [INFO][4897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:41.560266 env[1335]: time="2025-05-17T00:43:41.558328575Z" level=info msg="TearDown network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\" successfully" May 17 00:43:41.560417 env[1335]: time="2025-05-17T00:43:41.560243263Z" level=info msg="StopPodSandbox for \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\" returns successfully" May 17 00:43:41.560981 env[1335]: time="2025-05-17T00:43:41.560936170Z" level=info msg="RemovePodSandbox for \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\"" May 17 00:43:41.561350 env[1335]: time="2025-05-17T00:43:41.561243015Z" level=info msg="Forcibly stopping sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\"" May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.613 [WARNING][4921] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0", GenerateName:"calico-apiserver-bf9b9cc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a2ff53-9d59-4c60-9b54-47fa2348be26", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf9b9cc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"9e8f9ecf14fa8c53d57c44ee494b6ab9f003a1b1c031c2eeaf2a350c30263733", Pod:"calico-apiserver-bf9b9cc9-h6zp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic04c894dbf9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.614 [INFO][4921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.614 [INFO][4921] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" iface="eth0" netns="" May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.614 [INFO][4921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.614 [INFO][4921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.644 [INFO][4928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.644 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.645 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.654 [WARNING][4928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.654 [INFO][4928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" HandleID="k8s-pod-network.b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--h6zp7-eth0" May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.657 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.661216 env[1335]: 2025-05-17 00:43:41.659 [INFO][4921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e" May 17 00:43:41.662309 env[1335]: time="2025-05-17T00:43:41.661275719Z" level=info msg="TearDown network for sandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\" successfully" May 17 00:43:41.666591 env[1335]: time="2025-05-17T00:43:41.666531890Z" level=info msg="RemovePodSandbox \"b21def40dfada13c2c64cca48c2528f1ae02fcef9908587291c16b997b73f32e\" returns successfully" May 17 00:43:41.667369 env[1335]: time="2025-05-17T00:43:41.667304227Z" level=info msg="StopPodSandbox for \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\"" May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.726 [WARNING][4942] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b0f4094-a401-446d-a3d3-fefd3d968c34", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571", Pod:"coredns-7c65d6cfc9-wllmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali592dc6f6dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.727 [INFO][4942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.727 [INFO][4942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" iface="eth0" netns="" May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.727 [INFO][4942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.727 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.757 [INFO][4949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.757 [INFO][4949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.758 [INFO][4949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.767 [WARNING][4949] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.767 [INFO][4949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.769 [INFO][4949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.773502 env[1335]: 2025-05-17 00:43:41.771 [INFO][4942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:41.774660 env[1335]: time="2025-05-17T00:43:41.773547179Z" level=info msg="TearDown network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\" successfully" May 17 00:43:41.774660 env[1335]: time="2025-05-17T00:43:41.773629205Z" level=info msg="StopPodSandbox for \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\" returns successfully" May 17 00:43:41.774660 env[1335]: time="2025-05-17T00:43:41.774533690Z" level=info msg="RemovePodSandbox for \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\"" May 17 00:43:41.774660 env[1335]: time="2025-05-17T00:43:41.774581037Z" level=info msg="Forcibly stopping sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\"" May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.838 [WARNING][4964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2b0f4094-a401-446d-a3d3-fefd3d968c34", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"6c96a2b0aa655b73da973888545d895da2475d6dc0dfbd12e693841cf43a3571", Pod:"coredns-7c65d6cfc9-wllmd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali592dc6f6dbf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.838 [INFO][4964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.838 [INFO][4964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" iface="eth0" netns="" May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.838 [INFO][4964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.838 [INFO][4964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.871 [INFO][4971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.871 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.871 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.881 [WARNING][4971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.881 [INFO][4971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" HandleID="k8s-pod-network.c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-coredns--7c65d6cfc9--wllmd-eth0" May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.883 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.887957 env[1335]: 2025-05-17 00:43:41.885 [INFO][4964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b" May 17 00:43:41.889161 env[1335]: time="2025-05-17T00:43:41.887997199Z" level=info msg="TearDown network for sandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\" successfully" May 17 00:43:41.894044 env[1335]: time="2025-05-17T00:43:41.893980963Z" level=info msg="RemovePodSandbox \"c74add64f94e72d92249be9e04594893707b45ceebe262272cc0d7ec5930345b\" returns successfully" May 17 00:43:41.894823 env[1335]: time="2025-05-17T00:43:41.894780323Z" level=info msg="StopPodSandbox for \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\"" May 17 00:43:41.941215 env[1335]: time="2025-05-17T00:43:41.933977158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.951 [WARNING][4986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"603f05d6-04eb-4ce3-baf0-5f232fe52221", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282", Pod:"csi-node-driver-64ww9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01b7f015dc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.951 [INFO][4986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.951 [INFO][4986] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" iface="eth0" netns="" May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.952 [INFO][4986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.952 [INFO][4986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.979 [INFO][4993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.980 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.980 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.990 [WARNING][4993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.990 [INFO][4993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.992 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:41.995724 env[1335]: 2025-05-17 00:43:41.993 [INFO][4986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:41.996887 env[1335]: time="2025-05-17T00:43:41.995781889Z" level=info msg="TearDown network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\" successfully" May 17 00:43:41.996887 env[1335]: time="2025-05-17T00:43:41.995825341Z" level=info msg="StopPodSandbox for \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\" returns successfully" May 17 00:43:41.996887 env[1335]: time="2025-05-17T00:43:41.996646463Z" level=info msg="RemovePodSandbox for \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\"" May 17 00:43:41.996887 env[1335]: time="2025-05-17T00:43:41.996692736Z" level=info msg="Forcibly stopping sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\"" May 17 00:43:42.062967 env[1335]: time="2025-05-17T00:43:42.062826770Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io 
May 17 00:43:42.064319 env[1335]: time="2025-05-17T00:43:42.064240050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:42.064729 kubelet[2220]: E0517 00:43:42.064673 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:43:42.065328 kubelet[2220]: E0517 00:43:42.064773 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:43:42.065328 kubelet[2220]: E0517 00:43:42.065203 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:247aae74375a41939121aaa0f0cbe7f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cd79cdc-md8tk_calico-system(b432bbba-2503-4442-8976-021c08969eff): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:42.070387 env[1335]: time="2025-05-17T00:43:42.070340285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:43:42.117123 
env[1335]: 2025-05-17 00:43:42.050 [WARNING][5007] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"603f05d6-04eb-4ce3-baf0-5f232fe52221", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"db261d879acd1bbf2a170b11413ba008ae594a0423b4841b8a9bf79210023282", Pod:"csi-node-driver-64ww9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.93.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01b7f015dc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.050 [INFO][5007] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.050 [INFO][5007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" iface="eth0" netns="" May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.050 [INFO][5007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.051 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.098 [INFO][5015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.099 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.099 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.110 [WARNING][5015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.110 [INFO][5015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" HandleID="k8s-pod-network.9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-csi--node--driver--64ww9-eth0" May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.112 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:42.117123 env[1335]: 2025-05-17 00:43:42.114 [INFO][5007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04" May 17 00:43:42.118209 env[1335]: time="2025-05-17T00:43:42.117150097Z" level=info msg="TearDown network for sandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\" successfully" May 17 00:43:42.123203 env[1335]: time="2025-05-17T00:43:42.123143508Z" level=info msg="RemovePodSandbox \"9ca25d08d453e1f77355d1c4b335a41cb933254cf960f2d62f40b1a389256d04\" returns successfully" May 17 00:43:42.123837 env[1335]: time="2025-05-17T00:43:42.123784378Z" level=info msg="StopPodSandbox for \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\"" May 17 00:43:42.198158 env[1335]: time="2025-05-17T00:43:42.193988394Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:42.198158 env[1335]: time="2025-05-17T00:43:42.195793431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:42.198458 kubelet[2220]: E0517 00:43:42.196139 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:43:42.198458 kubelet[2220]: E0517 00:43:42.196268 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:43:42.198458 kubelet[2220]: E0517 00:43:42.196534 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cd79cdc-md8tk_calico-system(b432bbba-2503-4442-8976-021c08969eff): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:42.198458 kubelet[2220]: E0517 00:43:42.198058 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff" May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.181 [WARNING][5031] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0", GenerateName:"calico-kube-controllers-7f7cfb968-", Namespace:"calico-system", SelfLink:"", UID:"877258a8-70e4-4a88-a629-d7c04d184c1d", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7cfb968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386", Pod:"calico-kube-controllers-7f7cfb968-xnd89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bec3346c39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.181 [INFO][5031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:42.236520 env[1335]: 2025-05-17 
00:43:42.181 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" iface="eth0" netns="" May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.181 [INFO][5031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.181 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.220 [INFO][5038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.220 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.220 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.229 [WARNING][5038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.230 [INFO][5038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.232 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:42.236520 env[1335]: 2025-05-17 00:43:42.234 [INFO][5031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:42.237768 env[1335]: time="2025-05-17T00:43:42.237556502Z" level=info msg="TearDown network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\" successfully" May 17 00:43:42.237768 env[1335]: time="2025-05-17T00:43:42.237604409Z" level=info msg="StopPodSandbox for \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\" returns successfully" May 17 00:43:42.238627 env[1335]: time="2025-05-17T00:43:42.238504748Z" level=info msg="RemovePodSandbox for \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\"" May 17 00:43:42.238627 env[1335]: time="2025-05-17T00:43:42.238558172Z" level=info msg="Forcibly stopping sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\"" May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.287 [WARNING][5052] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0", GenerateName:"calico-kube-controllers-7f7cfb968-", Namespace:"calico-system", SelfLink:"", UID:"877258a8-70e4-4a88-a629-d7c04d184c1d", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 43, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f7cfb968", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"2957be688bd9cbce9b260c5c0f1b71f8c2d13b58df43cc7360cb852cbfe36386", Pod:"calico-kube-controllers-7f7cfb968-xnd89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bec3346c39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.288 [INFO][5052] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:42.340545 env[1335]: 2025-05-17 
00:43:42.288 [INFO][5052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" iface="eth0" netns="" May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.288 [INFO][5052] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.288 [INFO][5052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.324 [INFO][5059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.324 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.324 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.334 [WARNING][5059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.334 [INFO][5059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" HandleID="k8s-pod-network.93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--kube--controllers--7f7cfb968--xnd89-eth0" May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.336 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:42.340545 env[1335]: 2025-05-17 00:43:42.338 [INFO][5052] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b" May 17 00:43:42.341754 env[1335]: time="2025-05-17T00:43:42.340547107Z" level=info msg="TearDown network for sandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\" successfully" May 17 00:43:42.346316 env[1335]: time="2025-05-17T00:43:42.346249875Z" level=info msg="RemovePodSandbox \"93b9a36a879860ce4e519370ef46755f40cde3d17a808be0e2a6a9c4f6e89a1b\" returns successfully" May 17 00:43:42.347181 env[1335]: time="2025-05-17T00:43:42.347107637Z" level=info msg="StopPodSandbox for \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\"" May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.395 [WARNING][5073] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0", GenerateName:"calico-apiserver-bf9b9cc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ab41193-998e-4949-bcb0-dcdfdf7aa08f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf9b9cc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a", Pod:"calico-apiserver-bf9b9cc9-xp6vq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif703cbe5bcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.396 [INFO][5073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.396 [INFO][5073] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" iface="eth0" netns="" May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.396 [INFO][5073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.396 [INFO][5073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.429 [INFO][5081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.429 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.429 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.438 [WARNING][5081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.438 [INFO][5081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.441 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:42.445530 env[1335]: 2025-05-17 00:43:42.443 [INFO][5073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:42.450721 env[1335]: time="2025-05-17T00:43:42.445568717Z" level=info msg="TearDown network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\" successfully" May 17 00:43:42.450721 env[1335]: time="2025-05-17T00:43:42.445615704Z" level=info msg="StopPodSandbox for \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\" returns successfully" May 17 00:43:42.450721 env[1335]: time="2025-05-17T00:43:42.446281394Z" level=info msg="RemovePodSandbox for \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\"" May 17 00:43:42.450721 env[1335]: time="2025-05-17T00:43:42.446335619Z" level=info msg="Forcibly stopping sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\"" May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.502 [WARNING][5096] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0", GenerateName:"calico-apiserver-bf9b9cc9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ab41193-998e-4949-bcb0-dcdfdf7aa08f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 42, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf9b9cc9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-7-nightly-20250516-2100-d0fa8aa09af3236cc260", ContainerID:"7df4f6f058bd3a891307d3bd12c5c00a0213aa08cc2433a16c1bfcd29611a22a", Pod:"calico-apiserver-bf9b9cc9-xp6vq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif703cbe5bcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.503 [INFO][5096] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.503 [INFO][5096] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" iface="eth0" netns="" May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.503 [INFO][5096] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.503 [INFO][5096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.563 [INFO][5104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.564 [INFO][5104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.564 [INFO][5104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.571 [WARNING][5104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.571 [INFO][5104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" HandleID="k8s-pod-network.655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-calico--apiserver--bf9b9cc9--xp6vq-eth0" May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.573 [INFO][5104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:42.577534 env[1335]: 2025-05-17 00:43:42.575 [INFO][5096] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e" May 17 00:43:42.578655 env[1335]: time="2025-05-17T00:43:42.578579821Z" level=info msg="TearDown network for sandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\" successfully" May 17 00:43:42.584305 env[1335]: time="2025-05-17T00:43:42.584228082Z" level=info msg="RemovePodSandbox \"655d898f2d67a8eaf434feb1f5ac2cf9554ac6df2a67301aa09b7a52eedf380e\" returns successfully" May 17 00:43:42.584981 env[1335]: time="2025-05-17T00:43:42.584925871Z" level=info msg="StopPodSandbox for \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\"" May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.632 [WARNING][5118] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 
17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.632 [INFO][5118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.632 [INFO][5118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" iface="eth0" netns="" May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.632 [INFO][5118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.632 [INFO][5118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.663 [INFO][5125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.663 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.663 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.673 [WARNING][5125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.673 [INFO][5125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.675 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:42.679296 env[1335]: 2025-05-17 00:43:42.677 [INFO][5118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:42.680209 env[1335]: time="2025-05-17T00:43:42.679344284Z" level=info msg="TearDown network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\" successfully" May 17 00:43:42.680209 env[1335]: time="2025-05-17T00:43:42.679388672Z" level=info msg="StopPodSandbox for \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\" returns successfully" May 17 00:43:42.680615 env[1335]: time="2025-05-17T00:43:42.680572536Z" level=info msg="RemovePodSandbox for \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\"" May 17 00:43:42.680942 env[1335]: time="2025-05-17T00:43:42.680790065Z" level=info msg="Forcibly stopping sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\"" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.739 [WARNING][5140] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" WorkloadEndpoint="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.739 [INFO][5140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.739 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" iface="eth0" netns="" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.739 [INFO][5140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.739 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.771 [INFO][5148] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.771 [INFO][5148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.772 [INFO][5148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.782 [WARNING][5148] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.782 [INFO][5148] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" HandleID="k8s-pod-network.47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" Workload="ci--3510--3--7--nightly--20250516--2100--d0fa8aa09af3236cc260-k8s-whisker--7bcc67d899--smqwx-eth0" May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.785 [INFO][5148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:43:42.790683 env[1335]: 2025-05-17 00:43:42.787 [INFO][5140] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6" May 17 00:43:42.792548 env[1335]: time="2025-05-17T00:43:42.790636299Z" level=info msg="TearDown network for sandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\" successfully" May 17 00:43:42.802891 env[1335]: time="2025-05-17T00:43:42.802792336Z" level=info msg="RemovePodSandbox \"47052eef6e90e1a287f47cb3925b60178e7d86a377b782f231f838c1d9bc63f6\" returns successfully" May 17 00:43:43.706570 systemd[1]: run-containerd-runc-k8s.io-685c56b04208caa766b15202e9c4cafd9bae5134acf83397a2dad7ae810d9330-runc.p68Jdf.mount: Deactivated successfully. 
May 17 00:43:46.896008 kubelet[2220]: I0517 00:43:46.895945 2220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:43:46.928928 kubelet[2220]: I0517 00:43:46.928840 2220 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-64ww9" podStartSLOduration=34.607589607 podStartE2EDuration="44.928811989s" podCreationTimestamp="2025-05-17 00:43:02 +0000 UTC" firstStartedPulling="2025-05-17 00:43:29.253677841 +0000 UTC m=+48.538007338" lastFinishedPulling="2025-05-17 00:43:39.574900226 +0000 UTC m=+58.859229720" observedRunningTime="2025-05-17 00:43:40.54108349 +0000 UTC m=+59.825413009" watchObservedRunningTime="2025-05-17 00:43:46.928811989 +0000 UTC m=+66.213141511" May 17 00:43:46.937558 env[1335]: time="2025-05-17T00:43:46.937499579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:43:46.981000 audit[5181]: NETFILTER_CFG table=filter:128 family=2 entries=12 op=nft_register_rule pid=5181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:46.981000 audit[5181]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd63e6e0f0 a2=0 a3=7ffd63e6e0dc items=0 ppid=2367 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:47.033281 kernel: audit: type=1325 audit(1747442626.981:411): table=filter:128 family=2 entries=12 op=nft_register_rule pid=5181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:47.033546 kernel: audit: type=1300 audit(1747442626.981:411): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd63e6e0f0 a2=0 a3=7ffd63e6e0dc items=0 ppid=2367 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:46.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:47.035000 audit[5181]: NETFILTER_CFG table=nat:129 family=2 entries=34 op=nft_register_chain pid=5181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:47.064538 env[1335]: time="2025-05-17T00:43:47.064379677Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:43:47.065707 kernel: audit: type=1327 audit(1747442626.981:411): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:47.065827 kernel: audit: type=1325 audit(1747442627.035:412): table=nat:129 family=2 entries=34 op=nft_register_chain pid=5181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:43:47.065898 kernel: audit: type=1300 audit(1747442627.035:412): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd63e6e0f0 a2=0 a3=7ffd63e6e0dc items=0 ppid=2367 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:47.035000 audit[5181]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd63e6e0f0 a2=0 a3=7ffd63e6e0dc items=0 ppid=2367 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:47.035000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:47.114457 kernel: audit: type=1327 audit(1747442627.035:412): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:43:47.114965 env[1335]: time="2025-05-17T00:43:47.114878434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:43:47.115629 kubelet[2220]: E0517 00:43:47.115567 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:43:47.115902 kubelet[2220]: E0517 00:43:47.115837 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:43:47.116496 kubelet[2220]: E0517 00:43:47.116386 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zx976,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-4qrzt_calico-system(67d969c1-4d93-44a9-a00c-87eca6fdadfb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:43:47.118628 kubelet[2220]: E0517 00:43:47.118564 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb" May 17 00:43:55.913583 systemd[1]: Started sshd@7-10.128.0.56:22-139.178.89.65:37600.service. 
May 17 00:43:55.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.56:22-139.178.89.65:37600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:55.940462 kernel: audit: type=1130 audit(1747442635.914:413): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.56:22-139.178.89.65:37600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:43:55.957099 kubelet[2220]: E0517 00:43:55.957044 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff" May 17 00:43:56.266000 audit[5210]: USER_ACCT pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.297454 kernel: audit: type=1101 audit(1747442636.266:414): pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.297536 sshd[5210]: Accepted publickey for core from 139.178.89.65 port 37600 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:43:56.300062 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by 
(uid=0) May 17 00:43:56.298000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.331456 kernel: audit: type=1103 audit(1747442636.298:415): pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.350455 kernel: audit: type=1006 audit(1747442636.298:416): pid=5210 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 May 17 00:43:56.352471 systemd[1]: Started session-8.scope. May 17 00:43:56.353213 systemd-logind[1312]: New session 8 of user core. May 17 00:43:56.298000 audit[5210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6c128130 a2=3 a3=0 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:56.383589 kernel: audit: type=1300 audit(1747442636.298:416): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6c128130 a2=3 a3=0 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:43:56.298000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:43:56.397000 audit[5210]: USER_START pid=5210 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 
00:43:56.460535 kernel: audit: type=1327 audit(1747442636.298:416): proctitle=737368643A20636F7265205B707269765D May 17 00:43:56.460708 kernel: audit: type=1105 audit(1747442636.397:417): pid=5210 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.401000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.486468 kernel: audit: type=1103 audit(1747442636.401:418): pid=5213 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.797503 sshd[5210]: pam_unix(sshd:session): session closed for user core May 17 00:43:56.799000 audit[5210]: USER_END pid=5210 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.834768 kernel: audit: type=1106 audit(1747442636.799:419): pid=5210 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.834887 systemd[1]: sshd@7-10.128.0.56:22-139.178.89.65:37600.service: Deactivated successfully. 
May 17 00:43:56.837708 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:43:56.838562 systemd-logind[1312]: Session 8 logged out. Waiting for processes to exit. May 17 00:43:56.841048 systemd-logind[1312]: Removed session 8. May 17 00:43:56.799000 audit[5210]: CRED_DISP pid=5210 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.881522 kernel: audit: type=1104 audit(1747442636.799:420): pid=5210 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:43:56.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.56:22-139.178.89.65:37600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:00.939328 kubelet[2220]: E0517 00:44:00.939279 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb" May 17 00:44:01.844959 systemd[1]: Started sshd@8-10.128.0.56:22-139.178.89.65:36816.service. May 17 00:44:01.878251 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:44:01.878455 kernel: audit: type=1130 audit(1747442641.846:422): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.56:22-139.178.89.65:36816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:44:01.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.56:22-139.178.89.65:36816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:02.179000 audit[5225]: USER_ACCT pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.209723 kernel: audit: type=1101 audit(1747442642.179:423): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.210339 sshd[5225]: Accepted publickey for core from 139.178.89.65 port 36816 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:44:02.215719 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:02.232855 systemd[1]: Started session-9.scope. May 17 00:44:02.234618 systemd-logind[1312]: New session 9 of user core. 
May 17 00:44:02.214000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.286477 kernel: audit: type=1103 audit(1747442642.214:424): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.324687 kernel: audit: type=1006 audit(1747442642.214:425): pid=5225 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 May 17 00:44:02.214000 audit[5225]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8bcc7770 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:02.363583 kernel: audit: type=1300 audit(1747442642.214:425): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8bcc7770 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:02.214000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:44:02.373579 kernel: audit: type=1327 audit(1747442642.214:425): proctitle=737368643A20636F7265205B707269765D May 17 00:44:02.255000 audit[5225]: USER_START pid=5225 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.406591 
kernel: audit: type=1105 audit(1747442642.255:426): pid=5225 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.260000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.433475 kernel: audit: type=1103 audit(1747442642.260:427): pid=5228 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.674744 sshd[5225]: pam_unix(sshd:session): session closed for user core May 17 00:44:02.675000 audit[5225]: USER_END pid=5225 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.710501 kernel: audit: type=1106 audit(1747442642.675:428): pid=5225 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.735924 kernel: audit: type=1104 audit(1747442642.678:429): pid=5225 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 
terminal=ssh res=success' May 17 00:44:02.678000 audit[5225]: CRED_DISP pid=5225 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:02.738854 systemd[1]: sshd@8-10.128.0.56:22-139.178.89.65:36816.service: Deactivated successfully. May 17 00:44:02.741356 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:44:02.744378 systemd-logind[1312]: Session 9 logged out. Waiting for processes to exit. May 17 00:44:02.748143 systemd-logind[1312]: Removed session 9. May 17 00:44:02.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.56:22-139.178.89.65:36816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:07.751701 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:44:07.751946 kernel: audit: type=1130 audit(1747442647.720:431): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.56:22-139.178.89.65:40266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:07.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.56:22-139.178.89.65:40266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:07.721317 systemd[1]: Started sshd@9-10.128.0.56:22-139.178.89.65:40266.service. 
May 17 00:44:08.048000 audit[5244]: USER_ACCT pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.057906 sshd[5244]: Accepted publickey for core from 139.178.89.65 port 40266 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:44:08.080462 kernel: audit: type=1101 audit(1747442648.048:432): pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.078000 audit[5244]: CRED_ACQ pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.081529 sshd[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:08.107600 kernel: audit: type=1103 audit(1747442648.078:433): pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.107805 kernel: audit: type=1006 audit(1747442648.078:434): pid=5244 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 17 00:44:08.133590 systemd-logind[1312]: New session 10 of user core. May 17 00:44:08.134079 systemd[1]: Started session-10.scope. 
May 17 00:44:08.078000 audit[5244]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffba57e990 a2=3 a3=0 items=0 ppid=1 pid=5244 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:08.170601 kernel: audit: type=1300 audit(1747442648.078:434): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffba57e990 a2=3 a3=0 items=0 ppid=1 pid=5244 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:08.078000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:44:08.181474 kernel: audit: type=1327 audit(1747442648.078:434): proctitle=737368643A20636F7265205B707269765D May 17 00:44:08.155000 audit[5244]: USER_START pid=5244 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.155000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.239155 kernel: audit: type=1105 audit(1747442648.155:435): pid=5244 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.239387 kernel: audit: type=1103 audit(1747442648.155:436): pid=5247 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.506773 sshd[5244]: pam_unix(sshd:session): session closed for user core May 17 00:44:08.507000 audit[5244]: USER_END pid=5244 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.532926 systemd-logind[1312]: Session 10 logged out. Waiting for processes to exit. May 17 00:44:08.535556 systemd[1]: sshd@9-10.128.0.56:22-139.178.89.65:40266.service: Deactivated successfully. May 17 00:44:08.537265 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:44:08.540771 systemd-logind[1312]: Removed session 10. May 17 00:44:08.543485 kernel: audit: type=1106 audit(1747442648.507:437): pid=5244 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.527000 audit[5244]: CRED_DISP pid=5244 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.572461 kernel: audit: type=1104 audit(1747442648.527:438): pid=5244 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.577664 systemd[1]: Started sshd@10-10.128.0.56:22-139.178.89.65:40280.service. 
May 17 00:44:08.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.56:22-139.178.89.65:40266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:08.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.56:22-139.178.89.65:40280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:08.888000 audit[5258]: USER_ACCT pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.891535 sshd[5258]: Accepted publickey for core from 139.178.89.65 port 40280 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:44:08.890000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:08.890000 audit[5258]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffae248940 a2=3 a3=0 items=0 ppid=1 pid=5258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:08.890000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:44:08.892999 sshd[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:08.902712 systemd[1]: Started session-11.scope. May 17 00:44:08.903814 systemd-logind[1312]: New session 11 of user core. 
May 17 00:44:08.924000 audit[5258]: USER_START pid=5258 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:08.927000 audit[5261]: CRED_ACQ pid=5261 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:09.396018 sshd[5258]: pam_unix(sshd:session): session closed for user core
May 17 00:44:09.396000 audit[5258]: USER_END pid=5258 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:09.397000 audit[5258]: CRED_DISP pid=5258 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:09.402729 systemd-logind[1312]: Session 11 logged out. Waiting for processes to exit.
May 17 00:44:09.413679 systemd[1]: sshd@10-10.128.0.56:22-139.178.89.65:40280.service: Deactivated successfully.
May 17 00:44:09.415222 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:44:09.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.56:22-139.178.89.65:40280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:09.418736 systemd-logind[1312]: Removed session 11.
May 17 00:44:09.445801 systemd[1]: Started sshd@11-10.128.0.56:22-139.178.89.65:40292.service.
May 17 00:44:09.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.56:22-139.178.89.65:40292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:09.759000 audit[5268]: USER_ACCT pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:09.763129 sshd[5268]: Accepted publickey for core from 139.178.89.65 port 40292 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:09.762000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:09.762000 audit[5268]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeaf45c640 a2=3 a3=0 items=0 ppid=1 pid=5268 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:09.762000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:09.764510 sshd[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:09.777247 systemd[1]: Started session-12.scope.
May 17 00:44:09.778524 systemd-logind[1312]: New session 12 of user core.
May 17 00:44:09.798000 audit[5268]: USER_START pid=5268 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:09.802000 audit[5271]: CRED_ACQ pid=5271 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:10.168772 sshd[5268]: pam_unix(sshd:session): session closed for user core
May 17 00:44:10.169000 audit[5268]: USER_END pid=5268 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:10.170000 audit[5268]: CRED_DISP pid=5268 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:10.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.56:22-139.178.89.65:40292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:10.174591 systemd[1]: sshd@11-10.128.0.56:22-139.178.89.65:40292.service: Deactivated successfully.
May 17 00:44:10.178366 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:44:10.179212 systemd-logind[1312]: Session 12 logged out. Waiting for processes to exit.
May 17 00:44:10.182691 systemd-logind[1312]: Removed session 12.
May 17 00:44:10.979389 env[1335]: time="2025-05-17T00:44:10.978871238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:44:11.106835 env[1335]: time="2025-05-17T00:44:11.106743385Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 17 00:44:11.108546 env[1335]: time="2025-05-17T00:44:11.108447721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 17 00:44:11.108827 kubelet[2220]: E0517 00:44:11.108760 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:44:11.109445 kubelet[2220]: E0517 00:44:11.108834 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:44:11.109445 kubelet[2220]: E0517 00:44:11.108990 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:247aae74375a41939121aaa0f0cbe7f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cd79cdc-md8tk_calico-system(b432bbba-2503-4442-8976-021c08969eff): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 17 00:44:11.111844 env[1335]: time="2025-05-17T00:44:11.111801926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 00:44:11.242576 env[1335]: time="2025-05-17T00:44:11.242195842Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 17 00:44:11.243952 env[1335]: time="2025-05-17T00:44:11.243786795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 17 00:44:11.244272 kubelet[2220]: E0517 00:44:11.244201 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:44:11.244390 kubelet[2220]: E0517 00:44:11.244294 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:44:11.244949 kubelet[2220]: E0517 00:44:11.244741 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cd79cdc-md8tk_calico-system(b432bbba-2503-4442-8976-021c08969eff): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 17 00:44:11.246239 kubelet[2220]: E0517 00:44:11.246137 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff"
May 17 00:44:11.540922 systemd[1]: run-containerd-runc-k8s.io-685c56b04208caa766b15202e9c4cafd9bae5134acf83397a2dad7ae810d9330-runc.SlARUD.mount: Deactivated successfully.
May 17 00:44:13.744218 systemd[1]: run-containerd-runc-k8s.io-685c56b04208caa766b15202e9c4cafd9bae5134acf83397a2dad7ae810d9330-runc.1L9ato.mount: Deactivated successfully.
May 17 00:44:15.217719 systemd[1]: Started sshd@12-10.128.0.56:22-139.178.89.65:40298.service.
May 17 00:44:15.251471 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 17 00:44:15.251621 kernel: audit: type=1130 audit(1747442655.218:458): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.56:22-139.178.89.65:40298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:15.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.56:22-139.178.89.65:40298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:15.537000 audit[5325]: USER_ACCT pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.540963 sshd[5325]: Accepted publickey for core from 139.178.89.65 port 40298 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:15.569155 kernel: audit: type=1101 audit(1747442655.537:459): pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.570602 sshd[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:15.568000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.591391 systemd-logind[1312]: New session 13 of user core.
May 17 00:44:15.594555 systemd[1]: Started session-13.scope.
May 17 00:44:15.600670 kernel: audit: type=1103 audit(1747442655.568:460): pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.630478 kernel: audit: type=1006 audit(1747442655.568:461): pid=5325 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1
May 17 00:44:15.630662 kernel: audit: type=1300 audit(1747442655.568:461): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc16ba8d0 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:15.568000 audit[5325]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc16ba8d0 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:15.568000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:15.611000 audit[5325]: USER_START pid=5325 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.700009 kernel: audit: type=1327 audit(1747442655.568:461): proctitle=737368643A20636F7265205B707269765D
May 17 00:44:15.700206 kernel: audit: type=1105 audit(1747442655.611:462): pid=5325 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.622000 audit[5329]: CRED_ACQ pid=5329 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.725478 kernel: audit: type=1103 audit(1747442655.622:463): pid=5329 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.934560 env[1335]: time="2025-05-17T00:44:15.934138204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 17 00:44:15.956738 sshd[5325]: pam_unix(sshd:session): session closed for user core
May 17 00:44:15.957000 audit[5325]: USER_END pid=5325 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.964115 systemd-logind[1312]: Session 13 logged out. Waiting for processes to exit.
May 17 00:44:15.970065 systemd[1]: sshd@12-10.128.0.56:22-139.178.89.65:40298.service: Deactivated successfully.
May 17 00:44:15.971799 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:44:15.976905 systemd-logind[1312]: Removed session 13.
May 17 00:44:15.992514 kernel: audit: type=1106 audit(1747442655.957:464): pid=5325 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.957000 audit[5325]: CRED_DISP pid=5325 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:16.030462 kernel: audit: type=1104 audit(1747442655.957:465): pid=5325 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:15.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.56:22-139.178.89.65:40298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:16.081699 env[1335]: time="2025-05-17T00:44:16.081590101Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 17 00:44:16.083812 env[1335]: time="2025-05-17T00:44:16.083735543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 17 00:44:16.084534 kubelet[2220]: E0517 00:44:16.084467 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:44:16.085253 kubelet[2220]: E0517 00:44:16.085196 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 17 00:44:16.085745 kubelet[2220]: E0517 00:44:16.085650 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zx976,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-4qrzt_calico-system(67d969c1-4d93-44a9-a00c-87eca6fdadfb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 17 00:44:16.087877 kubelet[2220]: E0517 00:44:16.087828 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb"
May 17 00:44:21.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.56:22-139.178.89.65:50172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:21.001591 systemd[1]: Started sshd@13-10.128.0.56:22-139.178.89.65:50172.service.
May 17 00:44:21.007404 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:44:21.011630 kernel: audit: type=1130 audit(1747442661.001:467): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.56:22-139.178.89.65:50172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:21.311000 audit[5341]: USER_ACCT pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:21.342523 kernel: audit: type=1101 audit(1747442661.311:468): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:21.343588 sshd[5341]: Accepted publickey for core from 139.178.89.65 port 50172 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:21.344854 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:21.343000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:21.375506 kernel: audit: type=1103 audit(1747442661.343:469): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:21.382649 systemd[1]: Started session-14.scope.
May 17 00:44:21.393561 systemd-logind[1312]: New session 14 of user core.
May 17 00:44:21.394592 kernel: audit: type=1006 audit(1747442661.343:470): pid=5341 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
May 17 00:44:21.343000 audit[5341]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfb9b3d10 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:21.424540 kernel: audit: type=1300 audit(1747442661.343:470): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfb9b3d10 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:21.343000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:21.442449 kernel: audit: type=1327 audit(1747442661.343:470): proctitle=737368643A20636F7265205B707269765D
May 17 00:44:21.448000 audit[5341]: USER_START pid=5341 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:21.483485 kernel: audit: type=1105 audit(1747442661.448:471): pid=5341 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:21.484000 audit[5344]: CRED_ACQ pid=5344 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:21.511501 kernel: audit: type=1103 audit(1747442661.484:472): pid=5344 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:21.909668 systemd[1]: run-containerd-runc-k8s.io-4a8846f4a61e0929895aa5693269ce1a469c915109549821327692773f69d840-runc.wthcJR.mount: Deactivated successfully.
May 17 00:44:22.065867 sshd[5341]: pam_unix(sshd:session): session closed for user core
May 17 00:44:22.067000 audit[5341]: USER_END pid=5341 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:22.101464 kernel: audit: type=1106 audit(1747442662.067:473): pid=5341 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:22.103106 systemd[1]: sshd@13-10.128.0.56:22-139.178.89.65:50172.service: Deactivated successfully.
May 17 00:44:22.106310 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:44:22.107262 systemd-logind[1312]: Session 14 logged out. Waiting for processes to exit.
May 17 00:44:22.109278 systemd-logind[1312]: Removed session 14.
May 17 00:44:22.086000 audit[5341]: CRED_DISP pid=5341 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:22.148524 kernel: audit: type=1104 audit(1747442662.086:474): pid=5341 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:22.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.56:22-139.178.89.65:50172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:24.938947 kubelet[2220]: E0517 00:44:24.938890 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff"
May 17 00:44:27.112743 systemd[1]: Started sshd@14-10.128.0.56:22-139.178.89.65:41566.service.
May 17 00:44:27.144889 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:44:27.145042 kernel: audit: type=1130 audit(1747442667.114:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.56:22-139.178.89.65:41566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:27.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.56:22-139.178.89.65:41566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:27.454000 audit[5374]: USER_ACCT pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:27.485121 sshd[5374]: Accepted publickey for core from 139.178.89.65 port 41566 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:27.485709 kernel: audit: type=1101 audit(1747442667.454:477): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:27.489731 sshd[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:27.504833 systemd-logind[1312]: New session 15 of user core.
May 17 00:44:27.506834 systemd[1]: Started session-15.scope.
May 17 00:44:27.488000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:27.546463 kernel: audit: type=1103 audit(1747442667.488:478): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:27.590458 kernel: audit: type=1006 audit(1747442667.488:479): pid=5374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
May 17 00:44:27.488000 audit[5374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3eb8d4b0 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:27.632514 kernel: audit: type=1300 audit(1747442667.488:479): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3eb8d4b0 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:27.488000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:27.667516 kernel: audit: type=1327 audit(1747442667.488:479): proctitle=737368643A20636F7265205B707269765D
May 17 00:44:27.550000 audit[5374]: USER_START pid=5374 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:27.709457 kernel: audit: type=1105 audit(1747442667.550:480): pid=5374 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:27.555000 audit[5377]: CRED_ACQ pid=5377 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:27.770469 kernel: audit: type=1103 audit(1747442667.555:481): pid=5377 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.006771 sshd[5374]: pam_unix(sshd:session): session closed for user core
May 17 00:44:28.009000 audit[5374]: USER_END pid=5374 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.024164 systemd[1]: sshd@14-10.128.0.56:22-139.178.89.65:41566.service: Deactivated successfully.
May 17 00:44:28.028329 systemd-logind[1312]: Session 15 logged out. Waiting for processes to exit.
May 17 00:44:28.029805 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:44:28.032923 systemd-logind[1312]: Removed session 15.
May 17 00:44:28.042827 kernel: audit: type=1106 audit(1747442668.009:482): pid=5374 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.016000 audit[5374]: CRED_DISP pid=5374 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.067709 kernel: audit: type=1104 audit(1747442668.016:483): pid=5374 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.56:22-139.178.89.65:41566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:28.086037 systemd[1]: Started sshd@15-10.128.0.56:22-139.178.89.65:41572.service.
May 17 00:44:28.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.56:22-139.178.89.65:41572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:28.385000 audit[5386]: USER_ACCT pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.387305 sshd[5386]: Accepted publickey for core from 139.178.89.65 port 41572 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:28.387000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.387000 audit[5386]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe95dbd0f0 a2=3 a3=0 items=0 ppid=1 pid=5386 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:28.387000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:28.388504 sshd[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:28.399060 systemd-logind[1312]: New session 16 of user core.
May 17 00:44:28.399892 systemd[1]: Started session-16.scope.
May 17 00:44:28.415000 audit[5386]: USER_START pid=5386 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.419000 audit[5389]: CRED_ACQ pid=5389 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.855736 sshd[5386]: pam_unix(sshd:session): session closed for user core
May 17 00:44:28.857000 audit[5386]: USER_END pid=5386 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.857000 audit[5386]: CRED_DISP pid=5386 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:28.862293 systemd-logind[1312]: Session 16 logged out. Waiting for processes to exit.
May 17 00:44:28.865599 systemd[1]: sshd@15-10.128.0.56:22-139.178.89.65:41572.service: Deactivated successfully.
May 17 00:44:28.867055 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:44:28.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.56:22-139.178.89.65:41572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:28.870129 systemd-logind[1312]: Removed session 16.
May 17 00:44:28.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.56:22-139.178.89.65:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:28.911104 systemd[1]: Started sshd@16-10.128.0.56:22-139.178.89.65:41578.service.
May 17 00:44:28.935410 kubelet[2220]: E0517 00:44:28.935362 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb"
May 17 00:44:29.231000 audit[5397]: USER_ACCT pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:29.233703 sshd[5397]: Accepted publickey for core from 139.178.89.65 port 41578 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:29.235000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:29.235000 audit[5397]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd88b10720 a2=3 a3=0 items=0 ppid=1 pid=5397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:29.235000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:29.236540 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:29.247330 systemd-logind[1312]: New session 17 of user core.
May 17 00:44:29.249005 systemd[1]: Started session-17.scope.
May 17 00:44:29.267000 audit[5397]: USER_START pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:29.270000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:32.891702 sshd[5397]: pam_unix(sshd:session): session closed for user core
May 17 00:44:32.932093 kernel: kauditd_printk_skb: 20 callbacks suppressed
May 17 00:44:32.932245 kernel: audit: type=1106 audit(1747442672.892:500): pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:32.892000 audit[5397]: USER_END pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:32.898458 systemd-logind[1312]: Session 17 logged out. Waiting for processes to exit.
May 17 00:44:32.905206 systemd[1]: sshd@16-10.128.0.56:22-139.178.89.65:41578.service: Deactivated successfully.
May 17 00:44:32.907761 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:44:32.910034 systemd-logind[1312]: Removed session 17.
May 17 00:44:32.953016 systemd[1]: Started sshd@17-10.128.0.56:22-139.178.89.65:41592.service.
May 17 00:44:32.892000 audit[5397]: CRED_DISP pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:32.984523 kernel: audit: type=1104 audit(1747442672.892:501): pid=5397 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:33.013551 kernel: audit: type=1131 audit(1747442672.903:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.56:22-139.178.89.65:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:32.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.56:22-139.178.89.65:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:32.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.56:22-139.178.89.65:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:33.042514 kernel: audit: type=1130 audit(1747442672.953:503): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.56:22-139.178.89.65:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:33.045000 audit[5414]: NETFILTER_CFG table=filter:130 family=2 entries=12 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:33.098312 kernel: audit: type=1325 audit(1747442673.045:504): table=filter:130 family=2 entries=12 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:33.098513 kernel: audit: type=1300 audit(1747442673.045:504): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe8dd5c9f0 a2=0 a3=7ffe8dd5c9dc items=0 ppid=2367 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:33.045000 audit[5414]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe8dd5c9f0 a2=0 a3=7ffe8dd5c9dc items=0 ppid=2367 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:33.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:33.129467 kernel: audit: type=1327 audit(1747442673.045:504): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:33.251000 audit[5414]: NETFILTER_CFG table=nat:131 family=2 entries=22 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:33.275621 kernel: audit: type=1325 audit(1747442673.251:505): table=nat:131 family=2 entries=22 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:33.251000 audit[5414]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe8dd5c9f0 a2=0 a3=7ffe8dd5c9dc items=0 ppid=2367 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:33.313178 kernel: audit: type=1300 audit(1747442673.251:505): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe8dd5c9f0 a2=0 a3=7ffe8dd5c9dc items=0 ppid=2367 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:33.320316 sshd[5413]: Accepted publickey for core from 139.178.89.65 port 41592 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:33.323639 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:33.337732 systemd[1]: Started session-18.scope.
May 17 00:44:33.339531 systemd-logind[1312]: New session 18 of user core.
May 17 00:44:33.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:33.377516 kernel: audit: type=1327 audit(1747442673.251:505): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:33.317000 audit[5413]: USER_ACCT pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:33.319000 audit[5413]: CRED_ACQ pid=5413 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:33.319000 audit[5413]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8a2c6ed0 a2=3 a3=0 items=0 ppid=1 pid=5413 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:33.319000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:33.377000 audit[5413]: USER_START pid=5413 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:33.381000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:33.399000 audit[5419]: NETFILTER_CFG table=filter:132 family=2 entries=24 op=nft_register_rule pid=5419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:33.399000 audit[5419]: SYSCALL arch=c000003e syscall=46 success=yes exit=13432 a0=3 a1=7ffe60c41690 a2=0 a3=7ffe60c4167c items=0 ppid=2367 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:33.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:33.406000 audit[5419]: NETFILTER_CFG table=nat:133 family=2 entries=22 op=nft_register_rule pid=5419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:44:33.406000 audit[5419]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe60c41690 a2=0 a3=0 items=0 ppid=2367 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:33.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:44:34.010474 sshd[5413]: pam_unix(sshd:session): session closed for user core
May 17 00:44:34.011000 audit[5413]: USER_END pid=5413 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:34.011000 audit[5413]: CRED_DISP pid=5413 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:34.016191 systemd-logind[1312]: Session 18 logged out. Waiting for processes to exit.
May 17 00:44:34.018644 systemd[1]: sshd@17-10.128.0.56:22-139.178.89.65:41592.service: Deactivated successfully.
May 17 00:44:34.020137 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:44:34.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.56:22-139.178.89.65:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:34.023225 systemd-logind[1312]: Removed session 18.
May 17 00:44:34.054373 systemd[1]: Started sshd@18-10.128.0.56:22-139.178.89.65:41606.service.
May 17 00:44:34.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.56:22-139.178.89.65:41606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:34.364000 audit[5427]: USER_ACCT pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:34.367877 sshd[5427]: Accepted publickey for core from 139.178.89.65 port 41606 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:34.367000 audit[5427]: CRED_ACQ pid=5427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:34.367000 audit[5427]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff48e30c80 a2=3 a3=0 items=0 ppid=1 pid=5427 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:34.367000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:34.370321 sshd[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:34.380331 systemd[1]: Started session-19.scope.
May 17 00:44:34.381472 systemd-logind[1312]: New session 19 of user core.
May 17 00:44:34.401000 audit[5427]: USER_START pid=5427 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:34.404000 audit[5430]: CRED_ACQ pid=5430 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:34.796790 sshd[5427]: pam_unix(sshd:session): session closed for user core
May 17 00:44:34.798000 audit[5427]: USER_END pid=5427 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:34.799000 audit[5427]: CRED_DISP pid=5427 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:34.804034 systemd[1]: sshd@18-10.128.0.56:22-139.178.89.65:41606.service: Deactivated successfully.
May 17 00:44:34.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.56:22-139.178.89.65:41606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:34.806795 systemd-logind[1312]: Session 19 logged out. Waiting for processes to exit.
May 17 00:44:34.807349 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:44:34.810286 systemd-logind[1312]: Removed session 19.
May 17 00:44:36.753646 update_engine[1319]: I0517 00:44:36.753587 1319 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 17 00:44:36.753646 update_engine[1319]: I0517 00:44:36.753654 1319 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.754802 1319 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.755621 1319 omaha_request_params.cc:62] Current group set to lts
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.755846 1319 update_attempter.cc:499] Already updated boot flags. Skipping.
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.755860 1319 update_attempter.cc:643] Scheduling an action processor start.
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.755892 1319 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.755937 1319 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.756040 1319 omaha_request_action.cc:270] Posting an Omaha request to disabled
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.756050 1319 omaha_request_action.cc:271] Request:
May 17 00:44:36.756563 update_engine[1319]:
May 17 00:44:36.756563 update_engine[1319]:
May 17 00:44:36.756563 update_engine[1319]:
May 17 00:44:36.756563 update_engine[1319]:
May 17 00:44:36.756563 update_engine[1319]:
May 17 00:44:36.756563 update_engine[1319]:
May 17 00:44:36.756563 update_engine[1319]:
May 17 00:44:36.756563 update_engine[1319]:
May 17 00:44:36.756563 update_engine[1319]: I0517 00:44:36.756060 1319 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:44:36.759632 update_engine[1319]: I0517 00:44:36.759266 1319 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:44:36.759632 update_engine[1319]: I0517 00:44:36.759583 1319 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:44:36.760248 locksmithd[1375]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 17 00:44:36.843589 update_engine[1319]: E0517 00:44:36.842932 1319 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:44:36.843589 update_engine[1319]: I0517 00:44:36.843117 1319 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 17 00:44:39.869505 kernel: kauditd_printk_skb: 27 callbacks suppressed
May 17 00:44:39.869708 kernel: audit: type=1130 audit(1747442679.847:525): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.56:22-139.178.89.65:46452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:39.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.56:22-139.178.89.65:46452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:44:39.848620 systemd[1]: Started sshd@19-10.128.0.56:22-139.178.89.65:46452.service.
May 17 00:44:39.937761 kubelet[2220]: E0517 00:44:39.937637 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff" May 17 00:44:40.180000 audit[5440]: USER_ACCT pid=5440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.184611 sshd[5440]: Accepted publickey for core from 139.178.89.65 port 46452 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:44:40.212485 kernel: audit: type=1101 audit(1747442680.180:526): pid=5440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.213393 sshd[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:40.222865 systemd-logind[1312]: New session 20 of user core. May 17 00:44:40.225282 systemd[1]: Started session-20.scope. 
May 17 00:44:40.210000 audit[5440]: CRED_ACQ pid=5440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.289362 kernel: audit: type=1103 audit(1747442680.210:527): pid=5440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.289553 kernel: audit: type=1006 audit(1747442680.211:528): pid=5440 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 May 17 00:44:40.211000 audit[5440]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcafc85e60 a2=3 a3=0 items=0 ppid=1 pid=5440 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:40.318460 kernel: audit: type=1300 audit(1747442680.211:528): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcafc85e60 a2=3 a3=0 items=0 ppid=1 pid=5440 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:40.211000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:44:40.237000 audit[5440]: USER_START pid=5440 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.361632 kernel: audit: type=1327 audit(1747442680.211:528): proctitle=737368643A20636F7265205B707269765D May 17 00:44:40.361787 
kernel: audit: type=1105 audit(1747442680.237:529): pid=5440 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.241000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.386342 kernel: audit: type=1103 audit(1747442680.241:530): pid=5443 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.691000 audit[5452]: NETFILTER_CFG table=filter:134 family=2 entries=24 op=nft_register_rule pid=5452 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:44:40.710485 kernel: audit: type=1325 audit(1747442680.691:531): table=filter:134 family=2 entries=24 op=nft_register_rule pid=5452 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:44:40.691000 audit[5452]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffde9bc3cb0 a2=0 a3=7ffde9bc3c9c items=0 ppid=2367 pid=5452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:40.744455 kernel: audit: type=1300 audit(1747442680.691:531): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffde9bc3cb0 a2=0 a3=7ffde9bc3c9c items=0 ppid=2367 pid=5452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:40.691000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:44:40.718000 audit[5452]: NETFILTER_CFG table=nat:135 family=2 entries=106 op=nft_register_chain pid=5452 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:44:40.718000 audit[5452]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffde9bc3cb0 a2=0 a3=7ffde9bc3c9c items=0 ppid=2367 pid=5452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:40.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:44:40.749227 sshd[5440]: pam_unix(sshd:session): session closed for user core May 17 00:44:40.750000 audit[5440]: USER_END pid=5440 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.750000 audit[5440]: CRED_DISP pid=5440 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:40.756021 systemd-logind[1312]: Session 20 logged out. Waiting for processes to exit. May 17 00:44:40.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.56:22-139.178.89.65:46452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:44:40.757945 systemd[1]: sshd@19-10.128.0.56:22-139.178.89.65:46452.service: Deactivated successfully. May 17 00:44:40.759472 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:44:40.764963 systemd-logind[1312]: Removed session 20. May 17 00:44:40.936381 kubelet[2220]: E0517 00:44:40.934671 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb" May 17 00:44:43.726090 systemd[1]: run-containerd-runc-k8s.io-685c56b04208caa766b15202e9c4cafd9bae5134acf83397a2dad7ae810d9330-runc.Wv8IEY.mount: Deactivated successfully. May 17 00:44:45.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.56:22-139.178.89.65:46454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:45.772478 systemd[1]: Started sshd@20-10.128.0.56:22-139.178.89.65:46454.service. May 17 00:44:45.788463 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:44:45.788692 kernel: audit: type=1130 audit(1747442685.771:536): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.56:22-139.178.89.65:46454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:44:46.096000 audit[5479]: USER_ACCT pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.127410 sshd[5479]: Accepted publickey for core from 139.178.89.65 port 46454 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:44:46.128043 kernel: audit: type=1101 audit(1747442686.096:537): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.130927 sshd[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:46.149059 systemd[1]: Started session-21.scope. May 17 00:44:46.150530 systemd-logind[1312]: New session 21 of user core. 
May 17 00:44:46.128000 audit[5479]: CRED_ACQ pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.191459 kernel: audit: type=1103 audit(1747442686.128:538): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.208455 kernel: audit: type=1006 audit(1747442686.128:539): pid=5479 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 May 17 00:44:46.128000 audit[5479]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb8d72590 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:46.238462 kernel: audit: type=1300 audit(1747442686.128:539): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb8d72590 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:46.128000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:44:46.162000 audit[5479]: USER_START pid=5479 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.280606 kernel: audit: type=1327 audit(1747442686.128:539): proctitle=737368643A20636F7265205B707269765D May 17 00:44:46.280816 
kernel: audit: type=1105 audit(1747442686.162:540): pid=5479 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.288451 kernel: audit: type=1103 audit(1747442686.165:541): pid=5487 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.165000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.522000 audit[5479]: USER_END pid=5479 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.522587 sshd[5479]: pam_unix(sshd:session): session closed for user core May 17 00:44:46.533876 systemd-logind[1312]: Session 21 logged out. Waiting for processes to exit. May 17 00:44:46.536548 systemd[1]: sshd@20-10.128.0.56:22-139.178.89.65:46454.service: Deactivated successfully. May 17 00:44:46.538294 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:44:46.542362 systemd-logind[1312]: Removed session 21. 
May 17 00:44:46.557571 kernel: audit: type=1106 audit(1747442686.522:542): pid=5479 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.523000 audit[5479]: CRED_DISP pid=5479 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.584612 kernel: audit: type=1104 audit(1747442686.523:543): pid=5479 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:46.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.56:22-139.178.89.65:46454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:46.752579 update_engine[1319]: I0517 00:44:46.752508 1319 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:44:46.753286 update_engine[1319]: I0517 00:44:46.752897 1319 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:44:46.753286 update_engine[1319]: I0517 00:44:46.753166 1319 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:44:46.762163 update_engine[1319]: E0517 00:44:46.762110 1319 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:44:46.762391 update_engine[1319]: I0517 00:44:46.762287 1319 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 00:44:51.571563 systemd[1]: Started sshd@21-10.128.0.56:22-139.178.89.65:57522.service. 
May 17 00:44:51.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.56:22-139.178.89.65:57522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:51.581792 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:44:51.581972 kernel: audit: type=1130 audit(1747442691.571:545): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.56:22-139.178.89.65:57522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:51.899000 audit[5501]: USER_ACCT pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:51.903402 systemd[1]: run-containerd-runc-k8s.io-4a8846f4a61e0929895aa5693269ce1a469c915109549821327692773f69d840-runc.wyZyGp.mount: Deactivated successfully. 
May 17 00:44:51.931590 kernel: audit: type=1101 audit(1747442691.899:546): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:51.943468 sshd[5501]: Accepted publickey for core from 139.178.89.65 port 57522 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps May 17 00:44:51.940618 sshd[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:44:51.939000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:51.968065 env[1335]: time="2025-05-17T00:44:51.961004966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:44:51.975866 systemd[1]: Started session-22.scope. May 17 00:44:51.986612 systemd-logind[1312]: New session 22 of user core. 
May 17 00:44:51.992461 kernel: audit: type=1103 audit(1747442691.939:547): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.029510 kernel: audit: type=1006 audit(1747442691.939:548): pid=5501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 May 17 00:44:51.939000 audit[5501]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbb76b4d0 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:52.071457 kernel: audit: type=1300 audit(1747442691.939:548): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbb76b4d0 a2=3 a3=0 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:44:52.097584 kernel: audit: type=1327 audit(1747442691.939:548): proctitle=737368643A20636F7265205B707269765D May 17 00:44:51.939000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:44:52.031000 audit[5501]: USER_START pid=5501 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.147274 env[1335]: time="2025-05-17T00:44:52.147186311Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:44:52.147528 kernel: audit: type=1105 audit(1747442692.031:549): pid=5501 
uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.150416 env[1335]: time="2025-05-17T00:44:52.150346656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:44:52.150704 kubelet[2220]: E0517 00:44:52.150646 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:44:52.151312 kubelet[2220]: E0517 00:44:52.150726 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:44:52.151312 kubelet[2220]: E0517 00:44:52.150891 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:247aae74375a41939121aaa0f0cbe7f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-59cd79cdc-md8tk_calico-system(b432bbba-2503-4442-8976-021c08969eff): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:44:52.035000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.165844 env[1335]: time="2025-05-17T00:44:52.154154363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:44:52.182520 kernel: audit: type=1103 audit(1747442692.035:550): pid=5523 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.323954 env[1335]: time="2025-05-17T00:44:52.323864451Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:44:52.327691 env[1335]: time="2025-05-17T00:44:52.327611294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:44:52.328232 kubelet[2220]: E0517 00:44:52.328169 2220 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:44:52.328387 kubelet[2220]: E0517 00:44:52.328250 2220 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:44:52.328506 kubelet[2220]: E0517 00:44:52.328419 2220 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4w8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-59cd79cdc-md8tk_calico-system(b432bbba-2503-4442-8976-021c08969eff): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:44:52.330102 kubelet[2220]: E0517 00:44:52.330037 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-59cd79cdc-md8tk" podUID="b432bbba-2503-4442-8976-021c08969eff" May 17 00:44:52.455857 sshd[5501]: pam_unix(sshd:session): session closed for user core May 17 00:44:52.458000 audit[5501]: USER_END pid=5501 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.492461 kernel: audit: type=1106 audit(1747442692.458:551): pid=5501 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.495883 systemd-logind[1312]: Session 22 logged out. Waiting for processes to exit. May 17 00:44:52.499015 systemd[1]: sshd@21-10.128.0.56:22-139.178.89.65:57522.service: Deactivated successfully. May 17 00:44:52.500678 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:44:52.502350 systemd-logind[1312]: Removed session 22. May 17 00:44:52.491000 audit[5501]: CRED_DISP pid=5501 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.536543 kernel: audit: type=1104 audit(1747442692.491:552): pid=5501 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' May 17 00:44:52.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.56:22-139.178.89.65:57522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:52.939194 kubelet[2220]: E0517 00:44:52.935966 2220 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-4qrzt" podUID="67d969c1-4d93-44a9-a00c-87eca6fdadfb" May 17 00:44:56.755762 update_engine[1319]: I0517 00:44:56.755628 1319 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:44:56.756471 update_engine[1319]: I0517 00:44:56.756019 1319 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:44:56.756471 update_engine[1319]: I0517 00:44:56.756366 1319 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 00:44:56.764690 update_engine[1319]: E0517 00:44:56.764645 1319 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:44:56.764890 update_engine[1319]: I0517 00:44:56.764804 1319 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 00:44:57.533012 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:44:57.533173 kernel: audit: type=1130 audit(1747442697.502:554): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.56:22-139.178.89.65:52804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:57.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.56:22-139.178.89.65:52804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:44:57.502331 systemd[1]: Started sshd@22-10.128.0.56:22-139.178.89.65:52804.service. 
May 17 00:44:57.821000 audit[5540]: USER_ACCT pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:57.851760 kernel: audit: type=1101 audit(1747442697.821:555): pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:57.854056 sshd[5540]: Accepted publickey for core from 139.178.89.65 port 52804 ssh2: RSA SHA256:jyE3lnafiBGDGJK6dHnApyF/jgfCnjVgkPORJQqM9Ps
May 17 00:44:57.856142 sshd[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:57.854000 audit[5540]: CRED_ACQ pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:57.883456 kernel: audit: type=1103 audit(1747442697.854:556): pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:57.902459 kernel: audit: type=1006 audit(1747442697.855:557): pid=5540 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
May 17 00:44:57.906727 systemd-logind[1312]: New session 23 of user core.
May 17 00:44:57.913819 systemd[1]: Started session-23.scope.
May 17 00:44:57.855000 audit[5540]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff195b0250 a2=3 a3=0 items=0 ppid=1 pid=5540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:57.969458 kernel: audit: type=1300 audit(1747442697.855:557): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff195b0250 a2=3 a3=0 items=0 ppid=1 pid=5540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:44:57.855000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:44:57.930000 audit[5540]: USER_START pid=5540 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:58.013973 kernel: audit: type=1327 audit(1747442697.855:557): proctitle=737368643A20636F7265205B707269765D
May 17 00:44:58.014187 kernel: audit: type=1105 audit(1747442697.930:558): pid=5540 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:57.934000 audit[5543]: CRED_ACQ pid=5543 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:58.039473 kernel: audit: type=1103 audit(1747442697.934:559): pid=5543 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:58.276519 sshd[5540]: pam_unix(sshd:session): session closed for user core
May 17 00:44:58.278000 audit[5540]: USER_END pid=5540 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:58.313463 kernel: audit: type=1106 audit(1747442698.278:560): pid=5540 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:58.317709 systemd[1]: sshd@22-10.128.0.56:22-139.178.89.65:52804.service: Deactivated successfully.
May 17 00:44:58.319214 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:44:58.319496 systemd-logind[1312]: Session 23 logged out. Waiting for processes to exit.
May 17 00:44:58.322965 systemd-logind[1312]: Removed session 23.
May 17 00:44:58.278000 audit[5540]: CRED_DISP pid=5540 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:58.358468 kernel: audit: type=1104 audit(1747442698.278:561): pid=5540 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
May 17 00:44:58.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.56:22-139.178.89.65:52804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'