May 17 00:48:33.014993 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 17 00:48:33.015025 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri May 16 23:24:21 -00 2025 May 17 00:48:33.015034 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 17 00:48:33.015045 kernel: printk: bootconsole [pl11] enabled May 17 00:48:33.015050 kernel: efi: EFI v2.70 by EDK II May 17 00:48:33.015056 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98 May 17 00:48:33.015062 kernel: random: crng init done May 17 00:48:33.015068 kernel: ACPI: Early table checksum verification disabled May 17 00:48:33.015073 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 17 00:48:33.015079 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:33.015084 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:33.015090 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 17 00:48:33.015097 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:33.015103 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:33.015110 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:33.015116 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:33.015122 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:33.015129 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:33.015135 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 17 00:48:33.015141 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:33.015146 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 17 00:48:33.015152 kernel: NUMA: Failed to initialise from firmware May 17 00:48:33.015158 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] May 17 00:48:33.015164 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff] May 17 00:48:33.015169 kernel: Zone ranges: May 17 00:48:33.015175 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 17 00:48:33.015181 kernel: DMA32 empty May 17 00:48:33.015186 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 17 00:48:33.015193 kernel: Movable zone start for each node May 17 00:48:33.015199 kernel: Early memory node ranges May 17 00:48:33.015205 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 17 00:48:33.015211 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] May 17 00:48:33.015216 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 17 00:48:33.015229 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 17 00:48:33.015235 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 17 00:48:33.015241 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 17 00:48:33.015246 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 17 00:48:33.015252 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 17 00:48:33.015258 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 17 00:48:33.015264 kernel: psci: probing for conduit method from ACPI. May 17 00:48:33.015274 kernel: psci: PSCIv1.1 detected in firmware. May 17 00:48:33.015280 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:48:33.015286 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 17 00:48:33.015292 kernel: psci: SMC Calling Convention v1.4 May 17 00:48:33.015298 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 May 17 00:48:33.015305 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 May 17 00:48:33.015311 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 17 00:48:33.015317 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 17 00:48:33.015324 kernel: pcpu-alloc: [0] 0 [0] 1 May 17 00:48:33.015330 kernel: Detected PIPT I-cache on CPU0 May 17 00:48:33.015336 kernel: CPU features: detected: GIC system register CPU interface May 17 00:48:33.015342 kernel: CPU features: detected: Hardware dirty bit management May 17 00:48:33.015348 kernel: CPU features: detected: Spectre-BHB May 17 00:48:33.015354 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 00:48:33.015360 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 00:48:33.015366 kernel: CPU features: detected: ARM erratum 1418040 May 17 00:48:33.015373 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 17 00:48:33.015379 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 00:48:33.015385 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 17 00:48:33.015391 kernel: Policy zone: Normal May 17 00:48:33.015398 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5 May 17 00:48:33.015405 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 17 00:48:33.015411 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:48:33.015418 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:48:33.015424 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:48:33.015430 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) May 17 00:48:33.015436 kernel: Memory: 3986944K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207216K reserved, 0K cma-reserved) May 17 00:48:33.015444 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:48:33.015450 kernel: trace event string verifier disabled May 17 00:48:33.015456 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:48:33.015464 kernel: rcu: RCU event tracing is enabled. May 17 00:48:33.015470 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:48:33.015476 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:48:33.015483 kernel: Tracing variant of Tasks RCU enabled. May 17 00:48:33.015489 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:48:33.015495 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:48:33.015501 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:48:33.015507 kernel: GICv3: 960 SPIs implemented May 17 00:48:33.015514 kernel: GICv3: 0 Extended SPIs implemented May 17 00:48:33.015520 kernel: GICv3: Distributor has no Range Selector support May 17 00:48:33.015526 kernel: Root IRQ handler: gic_handle_irq May 17 00:48:33.015532 kernel: GICv3: 16 PPIs implemented May 17 00:48:33.015538 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 17 00:48:33.015544 kernel: ITS: No ITS available, not enabling LPIs May 17 00:48:33.015551 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:48:33.015557 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 17 00:48:33.015563 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 00:48:33.015569 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 00:48:33.015575 kernel: Console: colour dummy device 80x25 May 17 00:48:33.015583 kernel: printk: console [tty1] enabled May 17 00:48:33.015589 kernel: ACPI: Core revision 20210730 May 17 00:48:33.015596 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 00:48:33.015602 kernel: pid_max: default: 32768 minimum: 301 May 17 00:48:33.015609 kernel: LSM: Security Framework initializing May 17 00:48:33.015615 kernel: SELinux: Initializing. 
May 17 00:48:33.015621 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:48:33.015628 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:48:33.015634 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 17 00:48:33.015642 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 17 00:48:33.015649 kernel: rcu: Hierarchical SRCU implementation. May 17 00:48:33.015655 kernel: Remapping and enabling EFI services. May 17 00:48:33.015661 kernel: smp: Bringing up secondary CPUs ... May 17 00:48:33.015667 kernel: Detected PIPT I-cache on CPU1 May 17 00:48:33.015674 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 17 00:48:33.015680 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:48:33.015686 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 17 00:48:33.015693 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:48:33.015699 kernel: SMP: Total of 2 processors activated. 
May 17 00:48:33.015707 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:48:33.015713 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 17 00:48:33.015720 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:48:33.015726 kernel: CPU features: detected: CRC32 instructions May 17 00:48:33.015732 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:48:33.015739 kernel: CPU features: detected: LSE atomic instructions May 17 00:48:33.015745 kernel: CPU features: detected: Privileged Access Never May 17 00:48:33.015751 kernel: CPU: All CPU(s) started at EL1 May 17 00:48:33.015758 kernel: alternatives: patching kernel code May 17 00:48:33.015766 kernel: devtmpfs: initialized May 17 00:48:33.015777 kernel: KASLR enabled May 17 00:48:33.015784 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:48:33.015792 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:48:33.015798 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:48:33.015805 kernel: SMBIOS 3.1.0 present. 
May 17 00:48:33.015812 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 17 00:48:33.015819 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:48:33.015826 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 00:48:33.015834 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:48:33.015841 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:48:33.015848 kernel: audit: initializing netlink subsys (disabled) May 17 00:48:33.015855 kernel: audit: type=2000 audit(0.088:1): state=initialized audit_enabled=0 res=1 May 17 00:48:33.015861 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:48:33.015868 kernel: cpuidle: using governor menu May 17 00:48:33.015875 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 17 00:48:33.015883 kernel: ASID allocator initialised with 32768 entries May 17 00:48:33.015889 kernel: ACPI: bus type PCI registered May 17 00:48:33.017322 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:48:33.017335 kernel: Serial: AMBA PL011 UART driver May 17 00:48:33.017342 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:48:33.017349 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:48:33.017356 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:48:33.017363 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 17 00:48:33.017369 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:48:33.017380 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:48:33.017387 kernel: ACPI: Added _OSI(Module Device) May 17 00:48:33.017394 kernel: ACPI: Added _OSI(Processor Device) May 17 00:48:33.017401 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:48:33.017407 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:48:33.017414 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:48:33.017421 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:48:33.017428 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:48:33.017434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:48:33.017442 kernel: ACPI: Interpreter enabled May 17 00:48:33.017449 kernel: ACPI: Using GIC for interrupt routing May 17 00:48:33.017455 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 17 00:48:33.017462 kernel: printk: console [ttyAMA0] enabled May 17 00:48:33.017468 kernel: printk: bootconsole [pl11] disabled May 17 00:48:33.017475 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 17 00:48:33.017482 kernel: iommu: Default domain type: Translated May 17 00:48:33.017488 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:48:33.017495 kernel: vgaarb: loaded May 17 00:48:33.017501 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:48:33.017509 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:48:33.017516 kernel: PTP clock support registered May 17 00:48:33.017522 kernel: Registered efivars operations May 17 00:48:33.017529 kernel: No ACPI PMU IRQ for CPU0 May 17 00:48:33.017535 kernel: No ACPI PMU IRQ for CPU1 May 17 00:48:33.017542 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:48:33.017549 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:48:33.017555 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:48:33.017563 kernel: pnp: PnP ACPI init May 17 00:48:33.017570 kernel: pnp: PnP ACPI: found 0 devices May 17 00:48:33.017576 kernel: NET: Registered PF_INET protocol family May 17 00:48:33.017583 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:48:33.017590 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:48:33.017597 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:48:33.017604 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:48:33.017610 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 17 00:48:33.017617 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:48:33.017625 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:48:33.017632 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:48:33.017639 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:48:33.017646 kernel: PCI: CLS 0 bytes, default 64 May 17 00:48:33.017653 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 17 00:48:33.017659 kernel: kvm [1]: HYP mode not available May 17 00:48:33.017666 kernel: Initialise system trusted keyrings May 17 00:48:33.017672 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:48:33.017679 kernel: Key type asymmetric registered May 17 00:48:33.017686 kernel: Asymmetric key parser 'x509' registered May 17 00:48:33.017693 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:48:33.017700 kernel: io scheduler mq-deadline registered May 17 00:48:33.017706 kernel: io scheduler kyber registered May 17 00:48:33.017713 kernel: io scheduler bfq registered May 17 00:48:33.017719 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:48:33.017726 kernel: thunder_xcv, ver 1.0 May 17 00:48:33.017733 kernel: thunder_bgx, ver 1.0 May 17 00:48:33.017739 kernel: nicpf, ver 1.0 May 17 00:48:33.017746 kernel: nicvf, ver 1.0 May 17 00:48:33.017882 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:48:33.017963 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:48:32 UTC (1747442912) May 17 00:48:33.017973 kernel: efifb: probing for efifb May 17 00:48:33.017980 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 17 00:48:33.017986 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 17 00:48:33.017993 kernel: efifb: scrolling: redraw May 17 00:48:33.018000 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:48:33.018009 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:48:33.018015 kernel: fb0: EFI VGA frame buffer device May 17 00:48:33.018022 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 17 00:48:33.018029 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:48:33.018036 kernel: NET: Registered PF_INET6 protocol family May 17 00:48:33.018043 kernel: Segment Routing with IPv6 May 17 00:48:33.018049 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:48:33.018055 kernel: NET: Registered PF_PACKET protocol family May 17 00:48:33.018062 kernel: Key type dns_resolver registered May 17 00:48:33.018068 kernel: registered taskstats version 1 May 17 00:48:33.018076 kernel: Loading compiled-in X.509 certificates May 17 00:48:33.018084 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 2fa973ae674d09a62938b8c6a2b9446b5340adb7' May 17 00:48:33.018090 kernel: Key type .fscrypt registered May 17 00:48:33.018096 kernel: Key type fscrypt-provisioning registered May 17 00:48:33.018103 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:48:33.018110 kernel: ima: Allocated hash algorithm: sha1 May 17 00:48:33.018116 kernel: ima: No architecture policies found May 17 00:48:33.018123 kernel: clk: Disabling unused clocks May 17 00:48:33.018130 kernel: Freeing unused kernel memory: 36416K May 17 00:48:33.018137 kernel: Run /init as init process May 17 00:48:33.018144 kernel: with arguments: May 17 00:48:33.018150 kernel: /init May 17 00:48:33.018157 kernel: with environment: May 17 00:48:33.018163 kernel: HOME=/ May 17 00:48:33.018170 kernel: TERM=linux May 17 00:48:33.018176 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:48:33.018185 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:48:33.018195 systemd[1]: Detected virtualization microsoft. May 17 00:48:33.018202 systemd[1]: Detected architecture arm64. 
May 17 00:48:33.018209 systemd[1]: Running in initrd. May 17 00:48:33.018216 systemd[1]: No hostname configured, using default hostname. May 17 00:48:33.018223 systemd[1]: Hostname set to . May 17 00:48:33.018230 systemd[1]: Initializing machine ID from random generator. May 17 00:48:33.018238 systemd[1]: Queued start job for default target initrd.target. May 17 00:48:33.018246 systemd[1]: Started systemd-ask-password-console.path. May 17 00:48:33.018253 systemd[1]: Reached target cryptsetup.target. May 17 00:48:33.018259 systemd[1]: Reached target paths.target. May 17 00:48:33.018266 systemd[1]: Reached target slices.target. May 17 00:48:33.018273 systemd[1]: Reached target swap.target. May 17 00:48:33.018280 systemd[1]: Reached target timers.target. May 17 00:48:33.018288 systemd[1]: Listening on iscsid.socket. May 17 00:48:33.018295 systemd[1]: Listening on iscsiuio.socket. May 17 00:48:33.018303 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:48:33.018310 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:48:33.018317 systemd[1]: Listening on systemd-journald.socket. May 17 00:48:33.018325 systemd[1]: Listening on systemd-networkd.socket. May 17 00:48:33.018332 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:48:33.018339 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:48:33.018346 systemd[1]: Reached target sockets.target. May 17 00:48:33.018353 systemd[1]: Starting kmod-static-nodes.service... May 17 00:48:33.018360 systemd[1]: Finished network-cleanup.service. May 17 00:48:33.018368 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:48:33.018375 systemd[1]: Starting systemd-journald.service... May 17 00:48:33.018383 systemd[1]: Starting systemd-modules-load.service... May 17 00:48:33.018389 systemd[1]: Starting systemd-resolved.service... 
May 17 00:48:33.018401 systemd-journald[276]: Journal started May 17 00:48:33.018442 systemd-journald[276]: Runtime Journal (/run/log/journal/6154c9fe5b5f4701bdcec447e6bb0f89) is 8.0M, max 78.5M, 70.5M free. May 17 00:48:33.009289 systemd-modules-load[277]: Inserted module 'overlay' May 17 00:48:33.048929 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:48:33.048977 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:48:33.045286 systemd-resolved[278]: Positive Trust Anchors: May 17 00:48:33.069277 kernel: Bridge firewalling registered May 17 00:48:33.069299 systemd[1]: Started systemd-journald.service. May 17 00:48:33.045294 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:48:33.084662 kernel: SCSI subsystem initialized May 17 00:48:33.045321 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:48:33.166067 kernel: audit: type=1130 audit(1747442913.121:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.166091 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:48:33.166101 kernel: device-mapper: uevent: version 1.0.3 May 17 00:48:33.166109 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:48:33.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.047423 systemd-resolved[278]: Defaulting to hostname 'linux'. May 17 00:48:33.190170 kernel: audit: type=1130 audit(1747442913.169:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.064395 systemd-modules-load[277]: Inserted module 'br_netfilter' May 17 00:48:33.218500 kernel: audit: type=1130 audit(1747442913.193:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.140507 systemd[1]: Started systemd-resolved.service. May 17 00:48:33.168670 systemd-modules-load[277]: Inserted module 'dm_multipath'
May 17 00:48:33.247042 kernel: audit: type=1130 audit(1747442913.218:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.170242 systemd[1]: Finished kmod-static-nodes.service. May 17 00:48:33.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.194283 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:48:33.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.218910 systemd[1]: Finished systemd-modules-load.service. May 17 00:48:33.306917 kernel: audit: type=1130 audit(1747442913.246:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.306940 kernel: audit: type=1130 audit(1747442913.251:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.247002 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:48:33.252145 systemd[1]: Reached target nss-lookup.target. May 17 00:48:33.281811 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:48:33.306320 systemd[1]: Starting systemd-sysctl.service... May 17 00:48:33.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.321659 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:48:33.332006 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:48:33.392635 kernel: audit: type=1130 audit(1747442913.340:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.392658 kernel: audit: type=1130 audit(1747442913.371:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.361876 systemd[1]: Finished systemd-sysctl.service. May 17 00:48:33.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.371814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:48:33.425208 kernel: audit: type=1130 audit(1747442913.397:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.401983 systemd[1]: Starting dracut-cmdline.service... 
May 17 00:48:33.438841 dracut-cmdline[298]: dracut-dracut-053 May 17 00:48:33.442894 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5 May 17 00:48:33.503917 kernel: Loading iSCSI transport class v2.0-870. May 17 00:48:33.519933 kernel: iscsi: registered transport (tcp) May 17 00:48:33.540456 kernel: iscsi: registered transport (qla4xxx) May 17 00:48:33.540515 kernel: QLogic iSCSI HBA Driver May 17 00:48:33.575923 systemd[1]: Finished dracut-cmdline.service. May 17 00:48:33.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:33.584374 systemd[1]: Starting dracut-pre-udev.service... 
May 17 00:48:33.634919 kernel: raid6: neonx8 gen() 13729 MB/s May 17 00:48:33.654908 kernel: raid6: neonx8 xor() 10751 MB/s May 17 00:48:33.674911 kernel: raid6: neonx4 gen() 13536 MB/s May 17 00:48:33.695909 kernel: raid6: neonx4 xor() 11320 MB/s May 17 00:48:33.715908 kernel: raid6: neonx2 gen() 13015 MB/s May 17 00:48:33.735908 kernel: raid6: neonx2 xor() 10542 MB/s May 17 00:48:33.756908 kernel: raid6: neonx1 gen() 10625 MB/s May 17 00:48:33.776907 kernel: raid6: neonx1 xor() 8781 MB/s May 17 00:48:33.797908 kernel: raid6: int64x8 gen() 6257 MB/s May 17 00:48:33.818908 kernel: raid6: int64x8 xor() 3542 MB/s May 17 00:48:33.839907 kernel: raid6: int64x4 gen() 7249 MB/s May 17 00:48:33.860907 kernel: raid6: int64x4 xor() 3856 MB/s May 17 00:48:33.881908 kernel: raid6: int64x2 gen() 6153 MB/s May 17 00:48:33.901907 kernel: raid6: int64x2 xor() 3320 MB/s May 17 00:48:33.921908 kernel: raid6: int64x1 gen() 5046 MB/s May 17 00:48:33.947381 kernel: raid6: int64x1 xor() 2646 MB/s May 17 00:48:33.947392 kernel: raid6: using algorithm neonx8 gen() 13729 MB/s May 17 00:48:33.947400 kernel: raid6: .... xor() 10751 MB/s, rmw enabled May 17 00:48:33.951561 kernel: raid6: using neon recovery algorithm May 17 00:48:33.973116 kernel: xor: measuring software checksum speed May 17 00:48:33.973126 kernel: 8regs : 17202 MB/sec May 17 00:48:33.977026 kernel: 32regs : 20697 MB/sec May 17 00:48:33.980970 kernel: arm64_neon : 27700 MB/sec May 17 00:48:33.980980 kernel: xor: using function: arm64_neon (27700 MB/sec) May 17 00:48:34.041914 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 17 00:48:34.051992 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:48:34.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.059000 audit: BPF prog-id=7 op=LOAD May 17 00:48:34.059000 audit: BPF prog-id=8 op=LOAD May 17 00:48:34.061085 systemd[1]: Starting systemd-udevd.service... May 17 00:48:34.079300 systemd-udevd[474]: Using default interface naming scheme 'v252'. May 17 00:48:34.086218 systemd[1]: Started systemd-udevd.service. May 17 00:48:34.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.097893 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:48:34.112573 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation May 17 00:48:34.144718 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:48:34.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.150202 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:48:34.190129 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:48:34.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.244940 kernel: hv_vmbus: Vmbus version:5.3 May 17 00:48:34.253928 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:48:34.253975 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 17 00:48:34.275702 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:48:34.275753 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:48:34.282111 kernel: scsi host0: storvsc_host_t May 17 00:48:34.298169 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 17 00:48:34.298214 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:48:34.298224 kernel: scsi host1: storvsc_host_t May 17 00:48:34.298257 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:48:34.304482 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:48:34.315911 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:48:34.341512 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 17 00:48:34.342460 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:48:34.342474 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 17 00:48:34.359311 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:48:34.387353 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:48:34.387486 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:48:34.387574 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:48:34.387672 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 00:48:34.387756 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:48:34.387766 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 17 00:48:34.443664 kernel: hv_netvsc 000d3afc-9186-000d-3afc-9186000d3afc eth0: VF slot 1 added May 17 00:48:34.452929 kernel: hv_vmbus: registering driver hv_pci May 17 00:48:34.459926 kernel: hv_pci 0f74735e-cb90-4d4d-ade3-96adcdfd7c66: PCI VMBus probing: Using version 0x10004 May 17 00:48:34.764235 kernel: hv_pci 0f74735e-cb90-4d4d-ade3-96adcdfd7c66: PCI host bridge to bus cb90:00 May 17 00:48:34.764366 kernel: pci_bus cb90:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 17 00:48:34.764478 kernel: pci_bus cb90:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:48:34.764554 kernel: pci cb90:00:02.0: [15b3:1018] type 00 class 0x020000 May 17 00:48:34.764656 kernel: pci cb90:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:48:34.764739 kernel: pci cb90:00:02.0: enabling Extended Tags May 17 00:48:34.764817 kernel: pci cb90:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cb90:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 17 00:48:34.764894 kernel: pci_bus cb90:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:48:34.764998 kernel: pci cb90:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:48:34.801917 kernel: mlx5_core cb90:00:02.0: firmware version: 16.31.2424 May 17 00:48:35.132113 kernel: mlx5_core cb90:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) May 17 00:48:35.132230 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (538) May 17 00:48:35.132240 kernel: hv_netvsc 000d3afc-9186-000d-3afc-9186000d3afc eth0: VF registering: eth1 May 17 00:48:35.132322 kernel: mlx5_core cb90:00:02.0 eth1: joined to eth0 May 17 00:48:35.053409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:48:35.064217 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:48:35.151919 kernel: mlx5_core cb90:00:02.0 enP52112s1: renamed from eth1 May 17 00:48:35.205618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:48:35.241405 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:48:35.247637 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:48:35.262088 systemd[1]: Starting disk-uuid.service... May 17 00:48:35.286373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:48:35.304929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:48:36.301109 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:48:36.301164 disk-uuid[604]: The operation has completed successfully. May 17 00:48:36.357845 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:48:36.361033 systemd[1]: Finished disk-uuid.service. May 17 00:48:36.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.375290 systemd[1]: Starting verity-setup.service... May 17 00:48:36.412921 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:48:36.675478 systemd[1]: Found device dev-mapper-usr.device. May 17 00:48:36.681439 systemd[1]: Mounting sysusr-usr.mount... May 17 00:48:36.694075 systemd[1]: Finished verity-setup.service. May 17 00:48:36.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.747923 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:48:36.744052 systemd[1]: Mounted sysusr-usr.mount. 
May 17 00:48:36.748067 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:48:36.748812 systemd[1]: Starting ignition-setup.service... May 17 00:48:36.764516 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:48:36.798218 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:48:36.798270 kernel: BTRFS info (device sda6): using free space tree May 17 00:48:36.802751 kernel: BTRFS info (device sda6): has skinny extents May 17 00:48:36.838807 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:48:36.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.847000 audit: BPF prog-id=9 op=LOAD May 17 00:48:36.848039 systemd[1]: Starting systemd-networkd.service... May 17 00:48:36.871334 systemd-networkd[868]: lo: Link UP May 17 00:48:36.871346 systemd-networkd[868]: lo: Gained carrier May 17 00:48:36.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.872080 systemd-networkd[868]: Enumeration completed May 17 00:48:36.874591 systemd[1]: Started systemd-networkd.service. May 17 00:48:36.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.879306 systemd[1]: Reached target network.target. May 17 00:48:36.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:36.887879 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:48:36.889007 systemd[1]: Starting iscsiuio.service... May 17 00:48:36.937500 iscsid[880]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:48:36.937500 iscsid[880]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 00:48:36.937500 iscsid[880]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 17 00:48:36.937500 iscsid[880]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:48:36.937500 iscsid[880]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:48:36.937500 iscsid[880]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:48:36.937500 iscsid[880]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:48:36.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.900145 systemd[1]: Started iscsiuio.service. May 17 00:48:37.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.904536 systemd[1]: mnt-oem.mount: Deactivated successfully. 
May 17 00:48:37.056970 kernel: kauditd_printk_skb: 16 callbacks suppressed May 17 00:48:37.056993 kernel: audit: type=1130 audit(1747442917.027:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.908334 systemd[1]: Starting iscsid.service... May 17 00:48:36.912834 systemd[1]: Started iscsid.service. May 17 00:48:36.922094 systemd[1]: Starting dracut-initqueue.service... May 17 00:48:36.934115 systemd[1]: Finished dracut-initqueue.service. May 17 00:48:37.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.942272 systemd[1]: Reached target remote-fs-pre.target. May 17 00:48:37.098923 kernel: audit: type=1130 audit(1747442917.078:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:36.953878 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:48:36.989470 systemd[1]: Reached target remote-fs.target. May 17 00:48:36.999793 systemd[1]: Starting dracut-pre-mount.service... May 17 00:48:37.022820 systemd[1]: Finished dracut-pre-mount.service. May 17 00:48:37.073823 systemd[1]: Finished ignition-setup.service. May 17 00:48:37.099063 systemd[1]: Starting ignition-fetch-offline.service... 
May 17 00:48:37.163921 kernel: mlx5_core cb90:00:02.0 enP52112s1: Link up May 17 00:48:37.250291 kernel: hv_netvsc 000d3afc-9186-000d-3afc-9186000d3afc eth0: Data path switched to VF: enP52112s1 May 17 00:48:37.250487 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:48:37.251050 systemd-networkd[868]: enP52112s1: Link UP May 17 00:48:37.251146 systemd-networkd[868]: eth0: Link UP May 17 00:48:37.251262 systemd-networkd[868]: eth0: Gained carrier May 17 00:48:37.264285 systemd-networkd[868]: enP52112s1: Gained carrier May 17 00:48:37.275953 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:48:38.977044 systemd-networkd[868]: eth0: Gained IPv6LL May 17 00:48:40.028461 ignition[895]: Ignition 2.14.0 May 17 00:48:40.028473 ignition[895]: Stage: fetch-offline May 17 00:48:40.028528 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:40.028551 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:40.071308 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:40.071453 ignition[895]: parsed url from cmdline: "" May 17 00:48:40.078005 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:48:40.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:40.071457 ignition[895]: no config URL provided May 17 00:48:40.120446 kernel: audit: type=1130 audit(1747442920.082:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:40.103643 systemd[1]: Starting ignition-fetch.service... 
May 17 00:48:40.071462 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:48:40.071470 ignition[895]: no config at "/usr/lib/ignition/user.ign" May 17 00:48:40.071475 ignition[895]: failed to fetch config: resource requires networking May 17 00:48:40.071774 ignition[895]: Ignition finished successfully May 17 00:48:40.110034 ignition[901]: Ignition 2.14.0 May 17 00:48:40.110040 ignition[901]: Stage: fetch May 17 00:48:40.110136 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:40.110154 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:40.120305 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:40.121468 ignition[901]: parsed url from cmdline: "" May 17 00:48:40.121473 ignition[901]: no config URL provided May 17 00:48:40.121480 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:48:40.121494 ignition[901]: no config at "/usr/lib/ignition/user.ign" May 17 00:48:40.121529 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 17 00:48:40.206048 ignition[901]: GET result: OK May 17 00:48:40.206125 ignition[901]: config has been read from IMDS userdata May 17 00:48:40.209174 unknown[901]: fetched base config from "system" May 17 00:48:40.242673 kernel: audit: type=1130 audit(1747442920.218:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:40.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:40.206169 ignition[901]: parsing config with SHA512: 0c73df78e05b360eed13967361b17c3673ffdc75aa3f2042de81a31aea1dc9994eb1a3c4ac27e0dbdeaa0ee5188f8bdcbbaccd7acb3f798ab06efdbb2db0ff65 May 17 00:48:40.209190 unknown[901]: fetched base config from "system" May 17 00:48:40.209708 ignition[901]: fetch: fetch complete May 17 00:48:40.209196 unknown[901]: fetched user config from "azure" May 17 00:48:40.209713 ignition[901]: fetch: fetch passed May 17 00:48:40.214291 systemd[1]: Finished ignition-fetch.service. May 17 00:48:40.287710 kernel: audit: type=1130 audit(1747442920.264:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:40.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:40.209751 ignition[901]: Ignition finished successfully May 17 00:48:40.219873 systemd[1]: Starting ignition-kargs.service... May 17 00:48:40.250148 ignition[907]: Ignition 2.14.0 May 17 00:48:40.326121 kernel: audit: type=1130 audit(1747442920.302:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:40.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:40.260460 systemd[1]: Finished ignition-kargs.service. May 17 00:48:40.250155 ignition[907]: Stage: kargs May 17 00:48:40.265919 systemd[1]: Starting ignition-disks.service... May 17 00:48:40.250263 ignition[907]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:40.297637 systemd[1]: Finished ignition-disks.service. 
May 17 00:48:40.250281 ignition[907]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:40.302469 systemd[1]: Reached target initrd-root-device.target. May 17 00:48:40.252940 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:40.326313 systemd[1]: Reached target local-fs-pre.target. May 17 00:48:40.258438 ignition[907]: kargs: kargs passed May 17 00:48:40.334649 systemd[1]: Reached target local-fs.target. May 17 00:48:40.258489 ignition[907]: Ignition finished successfully May 17 00:48:40.342134 systemd[1]: Reached target sysinit.target. May 17 00:48:40.277446 ignition[913]: Ignition 2.14.0 May 17 00:48:40.350223 systemd[1]: Reached target basic.target. May 17 00:48:40.277453 ignition[913]: Stage: disks May 17 00:48:40.361334 systemd[1]: Starting systemd-fsck-root.service... May 17 00:48:40.277567 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:40.277593 ignition[913]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:40.291723 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:40.295138 ignition[913]: disks: disks passed May 17 00:48:40.295220 ignition[913]: Ignition finished successfully May 17 00:48:40.469202 systemd-fsck[921]: ROOT: clean, 619/7326000 files, 481078/7359488 blocks May 17 00:48:40.483222 systemd[1]: Finished systemd-fsck-root.service. May 17 00:48:40.511333 kernel: audit: type=1130 audit(1747442920.487:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:40.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:40.488659 systemd[1]: Mounting sysroot.mount... May 17 00:48:40.528921 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:48:40.530037 systemd[1]: Mounted sysroot.mount. May 17 00:48:40.534022 systemd[1]: Reached target initrd-root-fs.target. May 17 00:48:40.578151 systemd[1]: Mounting sysroot-usr.mount... May 17 00:48:40.583281 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 00:48:40.595795 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:48:40.595838 systemd[1]: Reached target ignition-diskful.target. May 17 00:48:40.611064 systemd[1]: Mounted sysroot-usr.mount. May 17 00:48:40.698111 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:48:40.703408 systemd[1]: Starting initrd-setup-root.service... May 17 00:48:40.733799 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (932) May 17 00:48:40.733855 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:48:40.733865 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:48:40.744614 kernel: BTRFS info (device sda6): using free space tree May 17 00:48:40.749518 kernel: BTRFS info (device sda6): has skinny extents May 17 00:48:40.754800 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:48:40.779188 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory May 17 00:48:40.806697 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:48:40.819426 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:48:41.368221 systemd[1]: Finished initrd-setup-root.service. May 17 00:48:41.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:41.373726 systemd[1]: Starting ignition-mount.service... May 17 00:48:41.402075 kernel: audit: type=1130 audit(1747442921.372:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:41.402308 systemd[1]: Starting sysroot-boot.service... May 17 00:48:41.408583 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:48:41.408695 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:48:41.431267 systemd[1]: Finished sysroot-boot.service. May 17 00:48:41.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:41.459589 kernel: audit: type=1130 audit(1747442921.436:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:41.467145 ignition[1001]: INFO : Ignition 2.14.0 May 17 00:48:41.467145 ignition[1001]: INFO : Stage: mount May 17 00:48:41.467145 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:41.467145 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:41.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:41.515396 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:41.515396 ignition[1001]: INFO : mount: mount passed May 17 00:48:41.515396 ignition[1001]: INFO : Ignition finished successfully May 17 00:48:41.532917 kernel: audit: type=1130 audit(1747442921.488:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:41.478820 systemd[1]: Finished ignition-mount.service. May 17 00:48:42.058151 coreos-metadata[931]: May 17 00:48:42.058 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:48:42.068123 coreos-metadata[931]: May 17 00:48:42.068 INFO Fetch successful May 17 00:48:42.101233 coreos-metadata[931]: May 17 00:48:42.101 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 17 00:48:42.113962 coreos-metadata[931]: May 17 00:48:42.113 INFO Fetch successful May 17 00:48:42.132941 coreos-metadata[931]: May 17 00:48:42.130 INFO wrote hostname ci-3510.3.7-n-e6f3637a46 to /sysroot/etc/hostname May 17 00:48:42.134204 systemd[1]: Finished flatcar-metadata-hostname.service. 
May 17 00:48:42.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:42.147069 systemd[1]: Starting ignition-files.service... May 17 00:48:42.174420 kernel: audit: type=1130 audit(1747442922.145:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:42.176765 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:48:42.200020 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1010) May 17 00:48:42.200058 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:48:42.200069 kernel: BTRFS info (device sda6): using free space tree May 17 00:48:42.204756 kernel: BTRFS info (device sda6): has skinny extents May 17 00:48:42.214350 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:48:42.230985 ignition[1029]: INFO : Ignition 2.14.0 May 17 00:48:42.235107 ignition[1029]: INFO : Stage: files May 17 00:48:42.235107 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:42.235107 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:42.258459 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:42.258459 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping May 17 00:48:42.258459 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:48:42.258459 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:48:42.297113 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:48:42.304502 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:48:42.319927 unknown[1029]: wrote ssh authorized keys file for user: core May 17 00:48:42.325324 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:48:42.325324 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:48:42.325324 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:48:42.325324 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 17 00:48:42.325324 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 17 00:48:42.424951 ignition[1029]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:48:42.641098 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 17 00:48:42.651573 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:48:42.651573 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:48:42.651573 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 
00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:48:42.680251 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem144005808" May 17 00:48:42.680251 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem144005808": device or resource busy May 17 00:48:42.680251 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem144005808", trying btrfs: device or resource busy May 17 00:48:42.666151 systemd[1]: mnt-oem144005808.mount: Deactivated successfully. 
May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem144005808" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem144005808" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem144005808" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem144005808" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1755602924" May 17 00:48:42.833342 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1755602924": device or resource busy May 17 00:48:42.833342 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1755602924", trying btrfs: device or resource busy May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1755602924" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): 
[finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1755602924" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1755602924" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1755602924" May 17 00:48:42.833342 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:48:42.985622 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:48:42.985622 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 17 00:48:43.546435 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK May 17 00:48:43.750097 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(14): [started] processing unit "waagent.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(14): [finished] processing unit "waagent.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(15): [started] processing unit "nvidia.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(15): [finished] processing unit "nvidia.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(16): [started] processing unit "containerd.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(16): 
op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(16): [finished] processing unit "containerd.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(18): [started] processing unit "prepare-helm.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:48:43.761718 ignition[1029]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:48:43.761718 ignition[1029]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:48:43.761718 ignition[1029]: INFO : files: files passed May 17 00:48:44.112615 kernel: audit: type=1130 audit(1747442923.779:38): pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.112647 kernel: audit: type=1130 audit(1747442923.856:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.112658 kernel: audit: type=1131 audit(1747442923.856:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.112667 kernel: audit: type=1130 audit(1747442923.904:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.112677 kernel: audit: type=1130 audit(1747442923.961:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.112685 kernel: audit: type=1131 audit(1747442923.961:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.112697 kernel: audit: type=1130 audit(1747442924.057:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:43.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:43.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:43.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:43.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:43.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:43.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.112876 ignition[1029]: INFO : Ignition finished successfully May 17 00:48:43.769112 systemd[1]: Finished ignition-files.service. May 17 00:48:43.803977 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:48:44.158993 kernel: audit: type=1131 audit(1747442924.134:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:48:44.159065 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:48:43.808981 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:48:43.809787 systemd[1]: Starting ignition-quench.service... May 17 00:48:43.827449 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:48:43.827571 systemd[1]: Finished ignition-quench.service. May 17 00:48:43.899662 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:48:43.905289 systemd[1]: Reached target ignition-complete.target. May 17 00:48:43.931395 systemd[1]: Starting initrd-parse-etc.service... May 17 00:48:43.955687 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:48:43.955798 systemd[1]: Finished initrd-parse-etc.service. May 17 00:48:43.961958 systemd[1]: Reached target initrd-fs.target. May 17 00:48:44.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.006016 systemd[1]: Reached target initrd.target. May 17 00:48:44.286835 kernel: audit: type=1131 audit(1747442924.260:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.017304 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:48:44.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.024296 systemd[1]: Starting dracut-pre-pivot.service... 
May 17 00:48:44.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.053030 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:48:44.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.058226 systemd[1]: Starting initrd-cleanup.service... May 17 00:48:44.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.099615 systemd[1]: Stopped target nss-lookup.target. May 17 00:48:44.108695 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:48:44.117242 systemd[1]: Stopped target timers.target. May 17 00:48:44.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:44.350741 ignition[1068]: INFO : Ignition 2.14.0 May 17 00:48:44.350741 ignition[1068]: INFO : Stage: umount May 17 00:48:44.350741 ignition[1068]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:44.350741 ignition[1068]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:44.350741 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:44.350741 ignition[1068]: INFO : umount: umount passed May 17 00:48:44.350741 ignition[1068]: INFO : Ignition finished successfully May 17 00:48:44.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:44.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.126190 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:48:44.126261 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:48:44.134355 systemd[1]: Stopped target initrd.target. May 17 00:48:44.158659 systemd[1]: Stopped target basic.target. May 17 00:48:44.162737 systemd[1]: Stopped target ignition-complete.target. May 17 00:48:44.176544 systemd[1]: Stopped target ignition-diskful.target. May 17 00:48:44.191331 systemd[1]: Stopped target initrd-root-device.target. May 17 00:48:44.200270 systemd[1]: Stopped target remote-fs.target. May 17 00:48:44.209240 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:48:44.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.217458 systemd[1]: Stopped target sysinit.target. May 17 00:48:44.226233 systemd[1]: Stopped target local-fs.target. May 17 00:48:44.234637 systemd[1]: Stopped target local-fs-pre.target. May 17 00:48:44.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.242862 systemd[1]: Stopped target swap.target. May 17 00:48:44.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:48:44.252071 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:48:44.546000 audit: BPF prog-id=6 op=UNLOAD May 17 00:48:44.252141 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:48:44.260586 systemd[1]: Stopped target cryptsetup.target. May 17 00:48:44.286711 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:48:44.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.286783 systemd[1]: Stopped dracut-initqueue.service. May 17 00:48:44.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.291167 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:48:44.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.291205 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:48:44.303021 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:48:44.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.303059 systemd[1]: Stopped ignition-files.service. May 17 00:48:44.311221 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:48:44.311260 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:48:44.323635 systemd[1]: Stopping ignition-mount.service... 
May 17 00:48:44.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.333309 systemd[1]: Stopping sysroot-boot.service... May 17 00:48:44.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.341357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:48:44.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.341431 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:48:44.350062 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:48:44.350115 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:48:44.355773 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:48:44.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.355864 systemd[1]: Finished initrd-cleanup.service. May 17 00:48:44.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.365312 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:48:44.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:44.365826 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:48:44.729778 kernel: hv_netvsc 000d3afc-9186-000d-3afc-9186000d3afc eth0: Data path switched from VF: enP52112s1 May 17 00:48:44.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.365987 systemd[1]: Stopped ignition-mount.service. May 17 00:48:44.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.373344 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:48:44.373409 systemd[1]: Stopped ignition-disks.service. May 17 00:48:44.384305 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:48:44.384354 systemd[1]: Stopped ignition-kargs.service. May 17 00:48:44.389037 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:48:44.389076 systemd[1]: Stopped ignition-fetch.service. May 17 00:48:44.407496 systemd[1]: Stopped target network.target. May 17 00:48:44.411703 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:48:44.411761 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:48:44.428963 systemd[1]: Stopped target paths.target. 
May 17 00:48:44.437705 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:48:44.448243 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:48:44.456196 systemd[1]: Stopped target slices.target. May 17 00:48:44.465218 systemd[1]: Stopped target sockets.target. May 17 00:48:44.473907 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:48:44.473942 systemd[1]: Closed iscsid.socket. May 17 00:48:44.481574 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:48:44.481597 systemd[1]: Closed iscsiuio.socket. May 17 00:48:44.489415 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:48:44.489460 systemd[1]: Stopped ignition-setup.service. May 17 00:48:44.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:44.497456 systemd[1]: Stopping systemd-networkd.service... May 17 00:48:44.507269 systemd[1]: Stopping systemd-resolved.service... May 17 00:48:44.514944 systemd-networkd[868]: eth0: DHCPv6 lease lost May 17 00:48:44.847000 audit: BPF prog-id=9 op=UNLOAD May 17 00:48:44.519584 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:48:44.519707 systemd[1]: Stopped systemd-networkd.service. May 17 00:48:44.525957 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:48:44.526050 systemd[1]: Stopped systemd-resolved.service. May 17 00:48:44.534637 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:48:44.534676 systemd[1]: Closed systemd-networkd.socket. May 17 00:48:44.542720 systemd[1]: Stopping network-cleanup.service... 
May 17 00:48:44.881000 audit: BPF prog-id=5 op=UNLOAD May 17 00:48:44.881000 audit: BPF prog-id=4 op=UNLOAD May 17 00:48:44.881000 audit: BPF prog-id=3 op=UNLOAD May 17 00:48:44.883000 audit: BPF prog-id=8 op=UNLOAD May 17 00:48:44.883000 audit: BPF prog-id=7 op=UNLOAD May 17 00:48:44.554995 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:48:44.555065 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:48:44.563587 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:48:44.563651 systemd[1]: Stopped systemd-sysctl.service. May 17 00:48:44.576936 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:48:44.918463 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). May 17 00:48:44.918520 iscsid[880]: iscsid shutting down. May 17 00:48:44.576989 systemd[1]: Stopped systemd-modules-load.service. May 17 00:48:44.582530 systemd[1]: Stopping systemd-udevd.service... May 17 00:48:44.593958 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:48:44.594472 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:48:44.594594 systemd[1]: Stopped systemd-udevd.service. May 17 00:48:44.603111 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:48:44.603156 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:48:44.615707 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:48:44.615753 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:48:44.625610 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:48:44.625664 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:48:44.634531 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:48:44.634584 systemd[1]: Stopped dracut-cmdline.service. May 17 00:48:44.642714 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 17 00:48:44.642758 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:48:44.655997 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:48:44.670796 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:48:44.670881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:48:44.684742 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:48:44.684800 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:48:44.690415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:48:44.690458 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:48:44.700967 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:48:44.701461 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:48:44.701565 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:48:44.720590 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:48:44.720710 systemd[1]: Stopped sysroot-boot.service. May 17 00:48:44.725074 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:48:44.725120 systemd[1]: Stopped initrd-setup-root.service. May 17 00:48:44.817107 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:48:44.817205 systemd[1]: Stopped network-cleanup.service. May 17 00:48:44.826245 systemd[1]: Reached target initrd-switch-root.target. May 17 00:48:44.835448 systemd[1]: Starting initrd-switch-root.service... May 17 00:48:44.880851 systemd[1]: Switching root. May 17 00:48:44.919469 systemd-journald[276]: Journal stopped May 17 00:49:07.222309 kernel: mlx5_core cb90:00:02.0: poll_health:739:(pid 0): device's health compromised - reached miss count May 17 00:49:07.222333 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:49:07.222346 kernel: SELinux: Class anon_inode not defined in policy. 
May 17 00:49:07.222355 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:49:07.222363 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:49:07.222372 kernel: SELinux: policy capability open_perms=1 May 17 00:49:07.222381 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:49:07.222390 kernel: SELinux: policy capability always_check_network=0 May 17 00:49:07.222398 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:49:07.222408 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:49:07.222416 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:49:07.222424 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:49:07.222433 kernel: kauditd_printk_skb: 39 callbacks suppressed May 17 00:49:07.222442 kernel: audit: type=1403 audit(1747442929.206:86): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:49:07.222455 systemd[1]: Successfully loaded SELinux policy in 261.513ms. May 17 00:49:07.222466 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.892ms. May 17 00:49:07.222477 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:49:07.222487 systemd[1]: Detected virtualization microsoft. May 17 00:49:07.222496 systemd[1]: Detected architecture arm64. May 17 00:49:07.222505 systemd[1]: Detected first boot. May 17 00:49:07.222516 systemd[1]: Hostname set to . May 17 00:49:07.222525 systemd[1]: Initializing machine ID from random generator. May 17 00:49:07.222534 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
May 17 00:49:07.222544 kernel: audit: type=1400 audit(1747442931.990:87): avc: denied { associate } for pid=1120 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:49:07.222554 kernel: audit: type=1300 audit(1747442931.990:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=400002221c a1=40000282b8 a2=4000026440 a3=32 items=0 ppid=1103 pid=1120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:07.222564 kernel: audit: type=1327 audit(1747442931.990:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:49:07.222574 kernel: audit: type=1400 audit(1747442932.004:88): avc: denied { associate } for pid=1120 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:49:07.222585 kernel: audit: type=1300 audit(1747442932.004:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000222f5 a2=1ed a3=0 items=2 ppid=1103 pid=1120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:07.222594 kernel: audit: type=1307 audit(1747442932.004:88): cwd="/" May 17 00:49:07.222603 kernel: audit: type=1302 audit(1747442932.004:88): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:07.222612 kernel: audit: type=1302 audit(1747442932.004:88): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:07.222622 kernel: audit: type=1327 audit(1747442932.004:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:49:07.222631 systemd[1]: Populated /etc with preset unit settings. May 17 00:49:07.222642 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:49:07.222652 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:49:07.222663 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:49:07.222673 systemd[1]: Queued start job for default target multi-user.target. May 17 00:49:07.222682 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:49:07.222692 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:49:07.222702 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:49:07.222713 systemd[1]: Created slice system-getty.slice. May 17 00:49:07.222724 systemd[1]: Created slice system-modprobe.slice. May 17 00:49:07.222734 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:49:07.222743 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
May 17 00:49:07.222753 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:49:07.222763 systemd[1]: Created slice user.slice. May 17 00:49:07.222773 systemd[1]: Started systemd-ask-password-console.path. May 17 00:49:07.222782 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:49:07.222792 systemd[1]: Set up automount boot.automount. May 17 00:49:07.222803 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:49:07.222813 systemd[1]: Reached target integritysetup.target. May 17 00:49:07.222823 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:49:07.222832 systemd[1]: Reached target remote-fs.target. May 17 00:49:07.222842 systemd[1]: Reached target slices.target. May 17 00:49:07.222851 systemd[1]: Reached target swap.target. May 17 00:49:07.222861 systemd[1]: Reached target torcx.target. May 17 00:49:07.222871 systemd[1]: Reached target veritysetup.target. May 17 00:49:07.222883 systemd[1]: Listening on systemd-coredump.socket. May 17 00:49:07.222892 systemd[1]: Listening on systemd-initctl.socket. May 17 00:49:07.227126 kernel: audit: type=1400 audit(1747442946.786:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:49:07.227147 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:49:07.227158 kernel: audit: type=1335 audit(1747442946.786:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:49:07.227168 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:49:07.227178 systemd[1]: Listening on systemd-journald.socket. May 17 00:49:07.227189 systemd[1]: Listening on systemd-networkd.socket. May 17 00:49:07.227205 systemd[1]: Listening on systemd-udevd-control.socket. 
May 17 00:49:07.227217 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:49:07.227227 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:49:07.227237 systemd[1]: Mounting dev-hugepages.mount... May 17 00:49:07.227246 systemd[1]: Mounting dev-mqueue.mount... May 17 00:49:07.227258 systemd[1]: Mounting media.mount... May 17 00:49:07.227268 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:49:07.227278 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:49:07.227288 systemd[1]: Mounting tmp.mount... May 17 00:49:07.227319 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:49:07.227330 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:49:07.227339 systemd[1]: Starting kmod-static-nodes.service... May 17 00:49:07.227352 systemd[1]: Starting modprobe@configfs.service... May 17 00:49:07.227362 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:49:07.227374 systemd[1]: Starting modprobe@drm.service... May 17 00:49:07.227384 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:49:07.227395 systemd[1]: Starting modprobe@fuse.service... May 17 00:49:07.227405 systemd[1]: Starting modprobe@loop.service... May 17 00:49:07.227416 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:49:07.227426 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:49:07.227436 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 17 00:49:07.227446 systemd[1]: Starting systemd-journald.service... May 17 00:49:07.227457 systemd[1]: Starting systemd-modules-load.service... May 17 00:49:07.227467 kernel: loop: module loaded May 17 00:49:07.227477 systemd[1]: Starting systemd-network-generator.service... May 17 00:49:07.227487 systemd[1]: Starting systemd-remount-fs.service... 
May 17 00:49:07.227496 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:49:07.227506 kernel: fuse: init (API version 7.34) May 17 00:49:07.227515 systemd[1]: Mounted dev-hugepages.mount. May 17 00:49:07.227525 systemd[1]: Mounted dev-mqueue.mount. May 17 00:49:07.227535 systemd[1]: Mounted media.mount. May 17 00:49:07.227546 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:49:07.227557 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:49:07.227567 kernel: audit: type=1305 audit(1747442947.201:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:49:07.227577 systemd[1]: Mounted tmp.mount. May 17 00:49:07.227591 systemd-journald[1233]: Journal started May 17 00:49:07.227649 systemd-journald[1233]: Runtime Journal (/run/log/journal/8445c46cdba54b868b8996e6fd2d089e) is 8.0M, max 78.5M, 70.5M free. May 17 00:49:06.786000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:49:07.201000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:49:07.201000 audit[1233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc85b5e00 a2=4000 a3=1 items=0 ppid=1 pid=1233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:07.261277 systemd[1]: Finished flatcar-tmpfiles.service. 
May 17 00:49:07.261325 kernel: audit: type=1300 audit(1747442947.201:91): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc85b5e00 a2=4000 a3=1 items=0 ppid=1 pid=1233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:07.261341 systemd[1]: Started systemd-journald.service. May 17 00:49:07.201000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:49:07.265664 kernel: audit: type=1327 audit(1747442947.201:91): proctitle="/usr/lib/systemd/systemd-journald" May 17 00:49:07.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.295273 kernel: audit: type=1130 audit(1747442947.260:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.295353 kernel: audit: type=1130 audit(1747442947.294:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.295815 systemd[1]: Finished kmod-static-nodes.service. May 17 00:49:07.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:49:07.317804 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:49:07.318067 systemd[1]: Finished modprobe@configfs.service. May 17 00:49:07.339868 kernel: audit: type=1130 audit(1747442947.316:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.340645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:49:07.340892 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:49:07.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.377700 kernel: audit: type=1130 audit(1747442947.339:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.377748 kernel: audit: type=1131 audit(1747442947.339:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:49:07.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.378653 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:49:07.378934 systemd[1]: Finished modprobe@drm.service. May 17 00:49:07.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.384075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:49:07.384299 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:49:07.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.389502 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:49:07.389721 systemd[1]: Finished modprobe@fuse.service. May 17 00:49:07.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:49:07.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.394959 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:49:07.395234 systemd[1]: Finished modprobe@loop.service. May 17 00:49:07.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.400395 systemd[1]: Finished systemd-modules-load.service. May 17 00:49:07.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.405660 systemd[1]: Finished systemd-network-generator.service. May 17 00:49:07.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.411386 systemd[1]: Finished systemd-remount-fs.service. May 17 00:49:07.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.416661 systemd[1]: Reached target network-pre.target. May 17 00:49:07.422543 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
May 17 00:49:07.428475 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:49:07.435545 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:49:07.468863 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:49:07.474264 systemd[1]: Starting systemd-journal-flush.service... May 17 00:49:07.478597 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:49:07.479747 systemd[1]: Starting systemd-random-seed.service... May 17 00:49:07.483998 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:49:07.485154 systemd[1]: Starting systemd-sysctl.service... May 17 00:49:07.490382 systemd[1]: Starting systemd-sysusers.service... May 17 00:49:07.496813 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:49:07.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.501709 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:49:07.507422 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:49:07.513281 systemd[1]: Starting systemd-udev-settle.service... May 17 00:49:07.521978 systemd[1]: Finished systemd-random-seed.service. May 17 00:49:07.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.528796 systemd[1]: Reached target first-boot-complete.target. May 17 00:49:07.535183 udevadm[1273]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
May 17 00:49:07.543696 systemd-journald[1233]: Time spent on flushing to /var/log/journal/8445c46cdba54b868b8996e6fd2d089e is 16.253ms for 1035 entries. May 17 00:49:07.543696 systemd-journald[1233]: System Journal (/var/log/journal/8445c46cdba54b868b8996e6fd2d089e) is 8.0M, max 2.6G, 2.6G free. May 17 00:49:07.636259 systemd-journald[1233]: Received client request to flush runtime journal. May 17 00:49:07.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:07.604307 systemd[1]: Finished systemd-sysctl.service. May 17 00:49:07.637454 systemd[1]: Finished systemd-journal-flush.service. May 17 00:49:07.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:08.124221 systemd[1]: Finished systemd-sysusers.service. May 17 00:49:08.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:08.130490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:49:08.477027 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:49:08.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:09.499163 systemd[1]: Finished systemd-hwdb-update.service. 
May 17 00:49:09.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:09.505367 systemd[1]: Starting systemd-udevd.service... May 17 00:49:09.523463 systemd-udevd[1284]: Using default interface naming scheme 'v252'. May 17 00:49:09.792626 systemd[1]: Started systemd-udevd.service. May 17 00:49:09.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:09.804111 systemd[1]: Starting systemd-networkd.service... May 17 00:49:09.829334 systemd[1]: Found device dev-ttyAMA0.device. May 17 00:49:09.907423 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:49:09.907538 kernel: hv_vmbus: registering driver hyperv_fb May 17 00:49:09.907561 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 17 00:49:09.915131 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 17 00:49:09.920459 kernel: Console: switching to colour dummy device 80x25 May 17 00:49:09.926923 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:49:09.947367 kernel: hv_utils: Registering HyperV Utility Driver May 17 00:49:09.947474 kernel: hv_vmbus: registering driver hv_utils May 17 00:49:09.954833 kernel: hv_utils: Heartbeat IC version 3.0 May 17 00:49:09.954954 kernel: hv_utils: Shutdown IC version 3.2 May 17 00:49:09.650764 kernel: hv_utils: TimeSync IC version 4.0 May 17 00:49:09.722570 systemd-journald[1233]: Time jumped backwards, rotating. 
May 17 00:49:09.722652 kernel: hv_vmbus: registering driver hv_balloon May 17 00:49:09.722666 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 17 00:49:09.722681 kernel: hv_balloon: Memory hot add disabled on ARM64 May 17 00:49:09.656000 audit[1290]: AVC avc: denied { confidentiality } for pid=1290 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:49:09.656000 audit[1290]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaeb78e130 a1=aa2c a2=ffffbef424b0 a3=aaaaeb6ee010 items=12 ppid=1284 pid=1290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:09.656000 audit: CWD cwd="/" May 17 00:49:09.656000 audit: PATH item=0 name=(null) inode=7322 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=1 name=(null) inode=10692 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=2 name=(null) inode=10692 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=3 name=(null) inode=10693 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=4 name=(null) inode=10692 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH 
item=5 name=(null) inode=10694 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=6 name=(null) inode=10692 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=7 name=(null) inode=10695 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=8 name=(null) inode=10692 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=9 name=(null) inode=10696 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=10 name=(null) inode=10692 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PATH item=11 name=(null) inode=10697 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:49:09.656000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:49:09.666414 systemd[1]: Starting systemd-userdbd.service... May 17 00:49:09.742679 systemd[1]: Started systemd-userdbd.service. May 17 00:49:09.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:49:09.876654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:49:09.882509 systemd[1]: Finished systemd-udev-settle.service. May 17 00:49:09.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:09.892819 systemd[1]: Starting lvm2-activation-early.service... May 17 00:49:09.987372 systemd-networkd[1305]: lo: Link UP May 17 00:49:09.987809 systemd-networkd[1305]: lo: Gained carrier May 17 00:49:09.988320 systemd-networkd[1305]: Enumeration completed May 17 00:49:09.988526 systemd[1]: Started systemd-networkd.service. May 17 00:49:09.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:09.994459 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:49:10.016635 systemd-networkd[1305]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:49:10.067500 kernel: mlx5_core cb90:00:02.0 enP52112s1: Link up May 17 00:49:10.133781 kernel: hv_netvsc 000d3afc-9186-000d-3afc-9186000d3afc eth0: Data path switched to VF: enP52112s1 May 17 00:49:10.134640 systemd-networkd[1305]: enP52112s1: Link UP May 17 00:49:10.134780 systemd-networkd[1305]: eth0: Link UP May 17 00:49:10.134789 systemd-networkd[1305]: eth0: Gained carrier May 17 00:49:10.140737 systemd-networkd[1305]: enP52112s1: Gained carrier May 17 00:49:10.148597 systemd-networkd[1305]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:49:10.367725 lvm[1363]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:49:10.395534 systemd[1]: Finished lvm2-activation-early.service. 
May 17 00:49:10.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:10.400793 systemd[1]: Reached target cryptsetup.target. May 17 00:49:10.407323 systemd[1]: Starting lvm2-activation.service... May 17 00:49:10.411578 lvm[1366]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:49:10.437517 systemd[1]: Finished lvm2-activation.service. May 17 00:49:10.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:10.442783 systemd[1]: Reached target local-fs-pre.target. May 17 00:49:10.447391 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:49:10.447510 systemd[1]: Reached target local-fs.target. May 17 00:49:10.451826 systemd[1]: Reached target machines.target. May 17 00:49:10.457690 systemd[1]: Starting ldconfig.service... May 17 00:49:10.461744 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:49:10.461891 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:49:10.463189 systemd[1]: Starting systemd-boot-update.service... May 17 00:49:10.468674 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:49:10.475598 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:49:10.481367 systemd[1]: Starting systemd-sysext.service... 
May 17 00:49:10.525978 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1369 (bootctl) May 17 00:49:10.527337 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:49:10.558147 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:49:10.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:10.568475 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:49:10.574466 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:49:10.574896 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:49:10.623519 kernel: loop0: detected capacity change from 0 to 203944 May 17 00:49:10.966818 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:49:10.967520 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:49:10.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:10.984647 systemd-fsck[1377]: fsck.fat 4.2 (2021-01-31) May 17 00:49:10.984647 systemd-fsck[1377]: /dev/sda1: 236 files, 117182/258078 clusters May 17 00:49:10.988229 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:49:10.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:10.997556 systemd[1]: Mounting boot.mount... May 17 00:49:11.012084 systemd[1]: Mounted boot.mount. May 17 00:49:11.023127 systemd[1]: Finished systemd-boot-update.service. 
May 17 00:49:11.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:11.072506 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:49:11.093524 kernel: loop1: detected capacity change from 0 to 203944 May 17 00:49:11.098320 (sd-sysext)[1393]: Using extensions 'kubernetes'. May 17 00:49:11.098669 (sd-sysext)[1393]: Merged extensions into '/usr'. May 17 00:49:11.115591 systemd[1]: Mounting usr-share-oem.mount... May 17 00:49:11.119551 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:49:11.120893 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:49:11.125992 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:49:11.131400 systemd[1]: Starting modprobe@loop.service... May 17 00:49:11.137179 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:49:11.137320 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:49:11.140087 systemd[1]: Mounted usr-share-oem.mount. May 17 00:49:11.144816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:49:11.144991 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:49:11.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:11.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:49:11.149866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:49:11.150022 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:49:11.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:11.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:11.155114 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:49:11.155323 systemd[1]: Finished modprobe@loop.service. May 17 00:49:11.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:11.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:11.160399 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:49:11.160656 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:49:11.161794 systemd[1]: Finished systemd-sysext.service. May 17 00:49:11.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:11.168287 systemd[1]: Starting ensure-sysext.service... 
May 17 00:49:11.173436 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 17 00:49:11.180664 systemd[1]: Reloading.
May 17 00:49:11.186445 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 17 00:49:11.204716 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:49:11.224804 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:49:11.239574 /usr/lib/systemd/system-generators/torcx-generator[1427]: time="2025-05-17T00:49:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:49:11.239906 /usr/lib/systemd/system-generators/torcx-generator[1427]: time="2025-05-17T00:49:11Z" level=info msg="torcx already run"
May 17 00:49:11.303626 systemd-networkd[1305]: eth0: Gained IPv6LL
May 17 00:49:11.331755 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:49:11.331927 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:49:11.349303 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:49:11.418784 systemd[1]: Finished systemd-networkd-wait-online.service.
May 17 00:49:11.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.433894 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:49:11.435467 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:49:11.441102 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:49:11.447050 systemd[1]: Starting modprobe@loop.service...
May 17 00:49:11.451645 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:49:11.451885 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:49:11.452828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:49:11.453105 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:49:11.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.458702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:49:11.458950 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:49:11.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.464387 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:49:11.464792 systemd[1]: Finished modprobe@loop.service.
May 17 00:49:11.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.471206 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:49:11.472899 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:49:11.478427 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:49:11.484368 systemd[1]: Starting modprobe@loop.service...
May 17 00:49:11.488824 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:49:11.489087 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:49:11.490228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:49:11.490554 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:49:11.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.496071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:49:11.496337 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:49:11.499549 kernel: kauditd_printk_skb: 60 callbacks suppressed
May 17 00:49:11.499606 kernel: audit: type=1130 audit(1747442951.494:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.535606 kernel: audit: type=1131 audit(1747442951.494:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.536692 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:49:11.537012 systemd[1]: Finished modprobe@loop.service.
May 17 00:49:11.552121 kernel: audit: type=1130 audit(1747442951.534:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.552276 kernel: audit: type=1131 audit(1747442951.534:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.582822 systemd[1]: Finished ensure-sysext.service.
May 17 00:49:11.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.612769 kernel: audit: type=1130 audit(1747442951.573:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.612858 kernel: audit: type=1131 audit(1747442951.573:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.612889 kernel: audit: type=1130 audit(1747442951.610:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.613323 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:49:11.614882 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:49:11.635340 systemd[1]: Starting modprobe@drm.service...
May 17 00:49:11.641429 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:49:11.647318 systemd[1]: Starting modprobe@loop.service...
May 17 00:49:11.651709 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:49:11.651873 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:49:11.652473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:49:11.652770 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:49:11.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.658735 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:49:11.659015 systemd[1]: Finished modprobe@drm.service.
May 17 00:49:11.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.698206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:49:11.698479 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:49:11.699033 kernel: audit: type=1130 audit(1747442951.656:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.699627 kernel: audit: type=1131 audit(1747442951.656:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.699656 kernel: audit: type=1130 audit(1747442951.696:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.721983 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:49:11.722210 systemd[1]: Finished modprobe@loop.service.
May 17 00:49:11.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:11.726945 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:49:11.726988 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:49:11.996405 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 17 00:49:12.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:12.003475 systemd[1]: Starting audit-rules.service...
May 17 00:49:12.008526 systemd[1]: Starting clean-ca-certificates.service...
May 17 00:49:12.014144 systemd[1]: Starting systemd-journal-catalog-update.service...
May 17 00:49:12.020981 systemd[1]: Starting systemd-resolved.service...
May 17 00:49:12.027103 systemd[1]: Starting systemd-timesyncd.service...
May 17 00:49:12.032441 systemd[1]: Starting systemd-update-utmp.service...
May 17 00:49:12.037262 systemd[1]: Finished clean-ca-certificates.service.
May 17 00:49:12.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:12.042731 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:49:12.077000 audit[1527]: SYSTEM_BOOT pid=1527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 17 00:49:12.081984 systemd[1]: Finished systemd-update-utmp.service.
May 17 00:49:12.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:12.131540 systemd[1]: Started systemd-timesyncd.service.
May 17 00:49:12.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:12.136221 systemd[1]: Reached target time-set.target.
May 17 00:49:12.183677 systemd[1]: Finished systemd-journal-catalog-update.service.
May 17 00:49:12.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:12.205318 systemd-resolved[1524]: Positive Trust Anchors:
May 17 00:49:12.205665 systemd-resolved[1524]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:49:12.205745 systemd-resolved[1524]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:49:12.331327 systemd-resolved[1524]: Using system hostname 'ci-3510.3.7-n-e6f3637a46'.
May 17 00:49:12.333063 systemd[1]: Started systemd-resolved.service.
May 17 00:49:12.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:12.337606 systemd[1]: Reached target network.target.
May 17 00:49:12.342101 systemd[1]: Reached target network-online.target.
May 17 00:49:12.347182 systemd[1]: Reached target nss-lookup.target.
May 17 00:49:12.381678 augenrules[1544]: No rules
May 17 00:49:12.380000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 17 00:49:12.380000 audit[1544]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd8671910 a2=420 a3=0 items=0 ppid=1520 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:49:12.380000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 17 00:49:12.382786 systemd[1]: Finished audit-rules.service.
May 17 00:49:12.463557 systemd-timesyncd[1525]: Contacted time server 144.202.62.209:123 (0.flatcar.pool.ntp.org).
May 17 00:49:12.463626 systemd-timesyncd[1525]: Initial clock synchronization to Sat 2025-05-17 00:49:12.462170 UTC.
May 17 00:49:18.362362 ldconfig[1368]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:49:18.377992 systemd[1]: Finished ldconfig.service.
May 17 00:49:18.385259 systemd[1]: Starting systemd-update-done.service...
May 17 00:49:18.427863 systemd[1]: Finished systemd-update-done.service.
May 17 00:49:18.433119 systemd[1]: Reached target sysinit.target.
May 17 00:49:18.437433 systemd[1]: Started motdgen.path.
May 17 00:49:18.441313 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 17 00:49:18.447718 systemd[1]: Started logrotate.timer.
May 17 00:49:18.451678 systemd[1]: Started mdadm.timer.
May 17 00:49:18.455399 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 17 00:49:18.460519 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:49:18.460552 systemd[1]: Reached target paths.target.
May 17 00:49:18.464713 systemd[1]: Reached target timers.target.
May 17 00:49:18.469392 systemd[1]: Listening on dbus.socket.
May 17 00:49:18.474596 systemd[1]: Starting docker.socket...
May 17 00:49:18.506456 systemd[1]: Listening on sshd.socket.
May 17 00:49:18.510713 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:49:18.511140 systemd[1]: Listening on docker.socket.
May 17 00:49:18.515179 systemd[1]: Reached target sockets.target.
May 17 00:49:18.519411 systemd[1]: Reached target basic.target.
May 17 00:49:18.523741 systemd[1]: System is tainted: cgroupsv1
May 17 00:49:18.523793 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:49:18.523816 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:49:18.525041 systemd[1]: Starting containerd.service...
May 17 00:49:18.529909 systemd[1]: Starting dbus.service...
May 17 00:49:18.534200 systemd[1]: Starting enable-oem-cloudinit.service...
May 17 00:49:18.539847 systemd[1]: Starting extend-filesystems.service...
May 17 00:49:18.544036 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 17 00:49:18.545608 systemd[1]: Starting kubelet.service...
May 17 00:49:18.550350 systemd[1]: Starting motdgen.service...
May 17 00:49:18.555351 systemd[1]: Started nvidia.service.
May 17 00:49:18.561176 systemd[1]: Starting prepare-helm.service...
May 17 00:49:18.566455 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 17 00:49:18.572136 systemd[1]: Starting sshd-keygen.service...
May 17 00:49:18.578239 systemd[1]: Starting systemd-logind.service...
May 17 00:49:18.582992 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:49:18.583086 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:49:18.586671 systemd[1]: Starting update-engine.service...
May 17 00:49:18.594884 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 17 00:49:18.603389 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:49:18.604857 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 17 00:49:18.639442 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:49:18.639706 systemd[1]: Finished motdgen.service.
May 17 00:49:18.647374 jq[1558]: false
May 17 00:49:18.647640 jq[1579]: true
May 17 00:49:18.648925 extend-filesystems[1559]: Found loop1
May 17 00:49:18.655419 extend-filesystems[1559]: Found sda
May 17 00:49:18.655419 extend-filesystems[1559]: Found sda1
May 17 00:49:18.655419 extend-filesystems[1559]: Found sda2
May 17 00:49:18.655419 extend-filesystems[1559]: Found sda3
May 17 00:49:18.655419 extend-filesystems[1559]: Found usr
May 17 00:49:18.655419 extend-filesystems[1559]: Found sda4
May 17 00:49:18.655419 extend-filesystems[1559]: Found sda6
May 17 00:49:18.655419 extend-filesystems[1559]: Found sda7
May 17 00:49:18.655419 extend-filesystems[1559]: Found sda9
May 17 00:49:18.655419 extend-filesystems[1559]: Checking size of /dev/sda9
May 17 00:49:18.750807 extend-filesystems[1559]: Old size kept for /dev/sda9
May 17 00:49:18.750807 extend-filesystems[1559]: Found sr0
May 17 00:49:18.791282 tar[1582]: linux-arm64/helm
May 17 00:49:18.791559 env[1585]: time="2025-05-17T00:49:18.723045673Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 17 00:49:18.664168 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:49:18.791860 jq[1597]: true
May 17 00:49:18.664417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 17 00:49:18.716363 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:49:18.716599 systemd-logind[1574]: New seat seat0.
May 17 00:49:18.751057 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:49:18.751327 systemd[1]: Finished extend-filesystems.service.
May 17 00:49:18.810120 env[1585]: time="2025-05-17T00:49:18.810070331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:49:18.810748 env[1585]: time="2025-05-17T00:49:18.810728333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:49:18.812215 env[1585]: time="2025-05-17T00:49:18.812183251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:49:18.812300 env[1585]: time="2025-05-17T00:49:18.812287245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:49:18.812809 env[1585]: time="2025-05-17T00:49:18.812785297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:49:18.812899 env[1585]: time="2025-05-17T00:49:18.812885411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:49:18.813026 env[1585]: time="2025-05-17T00:49:18.813008284Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 17 00:49:18.813093 env[1585]: time="2025-05-17T00:49:18.813080160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:49:18.813297 env[1585]: time="2025-05-17T00:49:18.813277309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:49:18.813641 env[1585]: time="2025-05-17T00:49:18.813621769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:49:18.813866 env[1585]: time="2025-05-17T00:49:18.813847316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:49:18.813930 env[1585]: time="2025-05-17T00:49:18.813918312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:49:18.814052 env[1585]: time="2025-05-17T00:49:18.814035386Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 17 00:49:18.814115 env[1585]: time="2025-05-17T00:49:18.814102702Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837114235Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837158912Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837171872Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837206790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837222349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837236588Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837250427Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837625366Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837646565Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837659644Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837673763Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837689442Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837834114Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:49:18.838516 env[1585]: time="2025-05-17T00:49:18.837911070Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838262370Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838289768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838312247Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838355165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838368804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838381963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838394322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838407002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838420801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838433000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838445919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:49:18.838851 env[1585]: time="2025-05-17T00:49:18.838460039Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:49:18.844609 env[1585]: time="2025-05-17T00:49:18.839783163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:49:18.844609 env[1585]: time="2025-05-17T00:49:18.839812762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:49:18.844609 env[1585]: time="2025-05-17T00:49:18.839826081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:49:18.844609 env[1585]: time="2025-05-17T00:49:18.839839960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:49:18.844609 env[1585]: time="2025-05-17T00:49:18.839857159Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 17 00:49:18.844609 env[1585]: time="2025-05-17T00:49:18.839871038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:49:18.844609 env[1585]: time="2025-05-17T00:49:18.839887677Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 17 00:49:18.844609 env[1585]: time="2025-05-17T00:49:18.839920996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:49:18.841118 systemd[1]: Started containerd.service.
May 17 00:49:18.844959 env[1585]: time="2025-05-17T00:49:18.840112825Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:49:18.844959 env[1585]: time="2025-05-17T00:49:18.840165622Z" level=info msg="Connect containerd service" May 17 00:49:18.844959 env[1585]: time="2025-05-17T00:49:18.840206019Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:49:18.844959 env[1585]: time="2025-05-17T00:49:18.840716510Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:49:18.844959 env[1585]: time="2025-05-17T00:49:18.840934338Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:49:18.844959 env[1585]: time="2025-05-17T00:49:18.840969696Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:49:18.844959 env[1585]: time="2025-05-17T00:49:18.841026773Z" level=info msg="containerd successfully booted in 0.118783s" May 17 00:49:18.862475 env[1585]: time="2025-05-17T00:49:18.853604419Z" level=info msg="Start subscribing containerd event" May 17 00:49:18.862475 env[1585]: time="2025-05-17T00:49:18.853728292Z" level=info msg="Start recovering state" May 17 00:49:18.862475 env[1585]: time="2025-05-17T00:49:18.853810007Z" level=info msg="Start event monitor" May 17 00:49:18.862475 env[1585]: time="2025-05-17T00:49:18.853836365Z" level=info msg="Start snapshots syncer" May 17 00:49:18.862475 env[1585]: time="2025-05-17T00:49:18.853850445Z" level=info msg="Start cni network conf syncer for default" May 17 00:49:18.862475 env[1585]: time="2025-05-17T00:49:18.853862964Z" level=info msg="Start streaming server" May 17 00:49:18.871108 bash[1621]: Updated "/home/core/.ssh/authorized_keys" May 17 00:49:18.872130 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
May 17 00:49:18.897106 systemd[1]: nvidia.service: Deactivated successfully. May 17 00:49:19.032673 dbus-daemon[1557]: [system] SELinux support is enabled May 17 00:49:19.032881 systemd[1]: Started dbus.service. May 17 00:49:19.038191 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:49:19.038726 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:49:19.038217 systemd[1]: Reached target system-config.target. May 17 00:49:19.046839 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:49:19.046864 systemd[1]: Reached target user-config.target. May 17 00:49:19.053110 systemd[1]: Started systemd-logind.service. May 17 00:49:19.330856 update_engine[1578]: I0517 00:49:19.316213 1578 main.cc:92] Flatcar Update Engine starting May 17 00:49:19.392702 systemd[1]: Started update-engine.service. May 17 00:49:19.393110 update_engine[1578]: I0517 00:49:19.392753 1578 update_check_scheduler.cc:74] Next update check in 11m43s May 17 00:49:19.400737 systemd[1]: Started locksmithd.service. May 17 00:49:19.529890 tar[1582]: linux-arm64/LICENSE May 17 00:49:19.529890 tar[1582]: linux-arm64/README.md May 17 00:49:19.536325 systemd[1]: Finished prepare-helm.service. May 17 00:49:19.614268 systemd[1]: Started kubelet.service. 
May 17 00:49:20.050148 kubelet[1676]: E0517 00:49:20.050111 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:49:20.051588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:49:20.051737 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:49:20.733538 locksmithd[1668]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:49:20.780744 sshd_keygen[1577]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:49:20.798151 systemd[1]: Finished sshd-keygen.service. May 17 00:49:20.804213 systemd[1]: Starting issuegen.service... May 17 00:49:20.809316 systemd[1]: Started waagent.service. May 17 00:49:20.816716 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:49:20.816937 systemd[1]: Finished issuegen.service. May 17 00:49:20.822723 systemd[1]: Starting systemd-user-sessions.service... May 17 00:49:20.863266 systemd[1]: Finished systemd-user-sessions.service. May 17 00:49:20.870001 systemd[1]: Started getty@tty1.service. May 17 00:49:20.875755 systemd[1]: Started serial-getty@ttyAMA0.service. May 17 00:49:20.880965 systemd[1]: Reached target getty.target. May 17 00:49:20.885430 systemd[1]: Reached target multi-user.target. May 17 00:49:20.891473 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:49:20.899752 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:49:20.899973 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:49:20.907874 systemd[1]: Startup finished in 16.792s (kernel) + 32.523s (userspace) = 49.315s. 
May 17 00:49:21.620079 login[1705]: pam_lastlog(login:session): file /var/log/lastlog is locked/write May 17 00:49:21.620446 login[1704]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:49:21.673616 systemd[1]: Created slice user-500.slice. May 17 00:49:21.674682 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:49:21.677300 systemd-logind[1574]: New session 2 of user core. May 17 00:49:21.717175 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:49:21.718637 systemd[1]: Starting user@500.service... May 17 00:49:21.761789 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:21.956056 systemd[1711]: Queued start job for default target default.target. May 17 00:49:21.956278 systemd[1711]: Reached target paths.target. May 17 00:49:21.956293 systemd[1711]: Reached target sockets.target. May 17 00:49:21.956304 systemd[1711]: Reached target timers.target. May 17 00:49:21.956315 systemd[1711]: Reached target basic.target. May 17 00:49:21.956429 systemd[1]: Started user@500.service. May 17 00:49:21.957284 systemd[1]: Started session-2.scope. May 17 00:49:21.957771 systemd[1711]: Reached target default.target. May 17 00:49:21.958055 systemd[1711]: Startup finished in 189ms. May 17 00:49:22.621532 login[1705]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:49:22.626030 systemd[1]: Started session-1.scope. May 17 00:49:22.626222 systemd-logind[1574]: New session 1 of user core. 
May 17 00:49:27.108418 waagent[1697]: 2025-05-17T00:49:27.108304Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 May 17 00:49:27.115291 waagent[1697]: 2025-05-17T00:49:27.115204Z INFO Daemon Daemon OS: flatcar 3510.3.7 May 17 00:49:27.121464 waagent[1697]: 2025-05-17T00:49:27.121385Z INFO Daemon Daemon Python: 3.9.16 May 17 00:49:27.129680 waagent[1697]: 2025-05-17T00:49:27.129592Z INFO Daemon Daemon Run daemon May 17 00:49:27.135315 waagent[1697]: 2025-05-17T00:49:27.135238Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' May 17 00:49:27.153251 waagent[1697]: 2025-05-17T00:49:27.153120Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 17 00:49:27.168343 waagent[1697]: 2025-05-17T00:49:27.168210Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:49:27.178741 waagent[1697]: 2025-05-17T00:49:27.178652Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:49:27.183792 waagent[1697]: 2025-05-17T00:49:27.183722Z INFO Daemon Daemon Using waagent for provisioning May 17 00:49:27.189444 waagent[1697]: 2025-05-17T00:49:27.189381Z INFO Daemon Daemon Activate resource disk May 17 00:49:27.194398 waagent[1697]: 2025-05-17T00:49:27.194330Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 17 00:49:27.208939 waagent[1697]: 2025-05-17T00:49:27.208859Z INFO Daemon Daemon Found device: None May 17 00:49:27.214446 waagent[1697]: 2025-05-17T00:49:27.214360Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 17 00:49:27.225863 waagent[1697]: 2025-05-17T00:49:27.225772Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
duration=0 May 17 00:49:27.238064 waagent[1697]: 2025-05-17T00:49:27.237987Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:49:27.244320 waagent[1697]: 2025-05-17T00:49:27.244241Z INFO Daemon Daemon Running default provisioning handler May 17 00:49:27.257563 waagent[1697]: 2025-05-17T00:49:27.257403Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 17 00:49:27.272795 waagent[1697]: 2025-05-17T00:49:27.272653Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:49:27.282575 waagent[1697]: 2025-05-17T00:49:27.282465Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:49:27.288567 waagent[1697]: 2025-05-17T00:49:27.288468Z INFO Daemon Daemon Copying ovf-env.xml May 17 00:49:27.396798 waagent[1697]: 2025-05-17T00:49:27.396603Z INFO Daemon Daemon Successfully mounted dvd May 17 00:49:27.477792 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 17 00:49:27.511731 waagent[1697]: 2025-05-17T00:49:27.511585Z INFO Daemon Daemon Detect protocol endpoint May 17 00:49:27.517033 waagent[1697]: 2025-05-17T00:49:27.516955Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:49:27.523180 waagent[1697]: 2025-05-17T00:49:27.523107Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 17 00:49:27.530277 waagent[1697]: 2025-05-17T00:49:27.530204Z INFO Daemon Daemon Test for route to 168.63.129.16 May 17 00:49:27.536144 waagent[1697]: 2025-05-17T00:49:27.536081Z INFO Daemon Daemon Route to 168.63.129.16 exists May 17 00:49:27.542320 waagent[1697]: 2025-05-17T00:49:27.542252Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 17 00:49:27.700188 waagent[1697]: 2025-05-17T00:49:27.700065Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 17 00:49:27.707098 waagent[1697]: 2025-05-17T00:49:27.707054Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 17 00:49:27.712590 waagent[1697]: 2025-05-17T00:49:27.712526Z INFO Daemon Daemon Server preferred version:2015-04-05 May 17 00:49:28.479151 waagent[1697]: 2025-05-17T00:49:28.478989Z INFO Daemon Daemon Initializing goal state during protocol detection May 17 00:49:28.494414 waagent[1697]: 2025-05-17T00:49:28.494341Z INFO Daemon Daemon Forcing an update of the goal state.. May 17 00:49:28.500547 waagent[1697]: 2025-05-17T00:49:28.500468Z INFO Daemon Daemon Fetching goal state [incarnation 1] May 17 00:49:28.590518 waagent[1697]: 2025-05-17T00:49:28.590347Z INFO Daemon Daemon Found private key matching thumbprint F9148DF9CEAEC03AED1EBBA4BDFCAB2AB54E5A47 May 17 00:49:28.600086 waagent[1697]: 2025-05-17T00:49:28.600000Z INFO Daemon Daemon Certificate with thumbprint A2E6F5CA009D4F8010A3F8124AC5E634345F9B51 has no matching private key. 
May 17 00:49:28.610111 waagent[1697]: 2025-05-17T00:49:28.610031Z INFO Daemon Daemon Fetch goal state completed May 17 00:49:28.813549 waagent[1697]: 2025-05-17T00:49:28.813473Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 87a3d1b4-b883-4830-8316-0b378da8ef06 New eTag: 9969923238317451572] May 17 00:49:28.824234 waagent[1697]: 2025-05-17T00:49:28.824151Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:49:28.840448 waagent[1697]: 2025-05-17T00:49:28.840385Z INFO Daemon Daemon Starting provisioning May 17 00:49:28.846619 waagent[1697]: 2025-05-17T00:49:28.846527Z INFO Daemon Daemon Handle ovf-env.xml. May 17 00:49:28.851893 waagent[1697]: 2025-05-17T00:49:28.851815Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-e6f3637a46] May 17 00:49:28.899569 waagent[1697]: 2025-05-17T00:49:28.899414Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-e6f3637a46] May 17 00:49:28.906201 waagent[1697]: 2025-05-17T00:49:28.906109Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 17 00:49:28.912943 waagent[1697]: 2025-05-17T00:49:28.912866Z INFO Daemon Daemon Primary interface is [eth0] May 17 00:49:28.929281 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. May 17 00:49:28.929522 systemd[1]: Stopped systemd-networkd-wait-online.service. May 17 00:49:28.929578 systemd[1]: Stopping systemd-networkd-wait-online.service... May 17 00:49:28.929759 systemd[1]: Stopping systemd-networkd.service... May 17 00:49:28.935537 systemd-networkd[1305]: eth0: DHCPv6 lease lost May 17 00:49:28.937173 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:49:28.937423 systemd[1]: Stopped systemd-networkd.service. May 17 00:49:28.939371 systemd[1]: Starting systemd-networkd.service... 
May 17 00:49:28.972877 systemd-networkd[1757]: enP52112s1: Link UP May 17 00:49:28.973144 systemd-networkd[1757]: enP52112s1: Gained carrier May 17 00:49:28.974285 systemd-networkd[1757]: eth0: Link UP May 17 00:49:28.974384 systemd-networkd[1757]: eth0: Gained carrier May 17 00:49:28.974813 systemd-networkd[1757]: lo: Link UP May 17 00:49:28.974879 systemd-networkd[1757]: lo: Gained carrier May 17 00:49:28.975176 systemd-networkd[1757]: eth0: Gained IPv6LL May 17 00:49:28.975447 systemd-networkd[1757]: Enumeration completed May 17 00:49:28.976234 systemd[1]: Started systemd-networkd.service. May 17 00:49:28.977188 systemd-networkd[1757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:49:28.978280 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:49:28.981031 waagent[1697]: 2025-05-17T00:49:28.980849Z INFO Daemon Daemon Create user account if not exists May 17 00:49:28.987273 waagent[1697]: 2025-05-17T00:49:28.987187Z INFO Daemon Daemon User core already exists, skip useradd May 17 00:49:28.993628 waagent[1697]: 2025-05-17T00:49:28.993542Z INFO Daemon Daemon Configure sudoer May 17 00:49:28.998376 waagent[1697]: 2025-05-17T00:49:28.998304Z INFO Daemon Daemon Configure sshd May 17 00:49:28.998603 systemd-networkd[1757]: eth0: DHCPv4 address 10.200.20.24/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:49:29.002962 waagent[1697]: 2025-05-17T00:49:29.002794Z INFO Daemon Daemon Deploy ssh public key. May 17 00:49:29.007682 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:49:30.173034 waagent[1697]: 2025-05-17T00:49:30.172964Z INFO Daemon Daemon Provisioning complete May 17 00:49:30.193162 waagent[1697]: 2025-05-17T00:49:30.193097Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 17 00:49:30.199368 waagent[1697]: 2025-05-17T00:49:30.199295Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
May 17 00:49:30.210017 waagent[1697]: 2025-05-17T00:49:30.209940Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent May 17 00:49:30.237987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:49:30.238150 systemd[1]: Stopped kubelet.service. May 17 00:49:30.239604 systemd[1]: Starting kubelet.service... May 17 00:49:30.347836 systemd[1]: Started kubelet.service. May 17 00:49:30.472138 kubelet[1775]: E0517 00:49:30.472046 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:49:30.474593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:49:30.474731 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:49:30.538558 waagent[1767]: 2025-05-17T00:49:30.538408Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent May 17 00:49:30.539382 waagent[1767]: 2025-05-17T00:49:30.539312Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:49:30.539533 waagent[1767]: 2025-05-17T00:49:30.539466Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:49:30.552298 waagent[1767]: 2025-05-17T00:49:30.552210Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
May 17 00:49:30.552498 waagent[1767]: 2025-05-17T00:49:30.552441Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] May 17 00:49:30.629393 waagent[1767]: 2025-05-17T00:49:30.629240Z INFO ExtHandler ExtHandler Found private key matching thumbprint F9148DF9CEAEC03AED1EBBA4BDFCAB2AB54E5A47 May 17 00:49:30.629648 waagent[1767]: 2025-05-17T00:49:30.629592Z INFO ExtHandler ExtHandler Certificate with thumbprint A2E6F5CA009D4F8010A3F8124AC5E634345F9B51 has no matching private key. May 17 00:49:30.629892 waagent[1767]: 2025-05-17T00:49:30.629841Z INFO ExtHandler ExtHandler Fetch goal state completed May 17 00:49:30.644564 waagent[1767]: 2025-05-17T00:49:30.644503Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 7564cffb-5120-45a0-83d2-a7437186b2a0 New eTag: 9969923238317451572] May 17 00:49:30.645131 waagent[1767]: 2025-05-17T00:49:30.645068Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:49:30.847921 waagent[1767]: 2025-05-17T00:49:30.847772Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:49:30.871936 waagent[1767]: 2025-05-17T00:49:30.871845Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1767 May 17 00:49:30.875743 waagent[1767]: 2025-05-17T00:49:30.875671Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:49:30.877108 waagent[1767]: 2025-05-17T00:49:30.877048Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 17 00:49:30.964794 waagent[1767]: 2025-05-17T00:49:30.964731Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:49:30.965195 waagent[1767]: 2025-05-17T00:49:30.965135Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:49:30.973364 waagent[1767]: 2025-05-17T00:49:30.973298Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:49:30.973928 waagent[1767]: 2025-05-17T00:49:30.973868Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:49:30.975142 waagent[1767]: 2025-05-17T00:49:30.975075Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] May 17 00:49:30.976561 waagent[1767]: 2025-05-17T00:49:30.976467Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:49:30.977194 waagent[1767]: 2025-05-17T00:49:30.977133Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:49:30.977457 waagent[1767]: 2025-05-17T00:49:30.977406Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:49:30.978168 waagent[1767]: 2025-05-17T00:49:30.978109Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
May 17 00:49:30.978578 waagent[1767]: 2025-05-17T00:49:30.978520Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:49:30.978578 waagent[1767]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:49:30.978578 waagent[1767]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:49:30.978578 waagent[1767]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:49:30.978578 waagent[1767]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:49:30.978578 waagent[1767]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:49:30.978578 waagent[1767]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:49:30.981068 waagent[1767]: 2025-05-17T00:49:30.980898Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:49:30.982007 waagent[1767]: 2025-05-17T00:49:30.981941Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:49:30.982298 waagent[1767]: 2025-05-17T00:49:30.982245Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:49:30.982720 waagent[1767]: 2025-05-17T00:49:30.982647Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:49:30.983234 waagent[1767]: 2025-05-17T00:49:30.983171Z INFO EnvHandler ExtHandler Configure routes May 17 00:49:30.983327 waagent[1767]: 2025-05-17T00:49:30.983271Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:49:30.983731 waagent[1767]: 2025-05-17T00:49:30.983662Z INFO EnvHandler ExtHandler Gateway:None May 17 00:49:30.984307 waagent[1767]: 2025-05-17T00:49:30.984189Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:49:30.984388 waagent[1767]: 2025-05-17T00:49:30.984322Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 17 00:49:30.984840 waagent[1767]: 2025-05-17T00:49:30.984760Z INFO EnvHandler ExtHandler Routes:None May 17 00:49:30.987177 waagent[1767]: 2025-05-17T00:49:30.987123Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:49:30.998069 waagent[1767]: 2025-05-17T00:49:30.997986Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) May 17 00:49:30.998761 waagent[1767]: 2025-05-17T00:49:30.998701Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:49:30.999746 waagent[1767]: 2025-05-17T00:49:30.999682Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' May 17 00:49:31.025971 waagent[1767]: 2025-05-17T00:49:31.025862Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1757' May 17 00:49:31.077316 waagent[1767]: 2025-05-17T00:49:31.077230Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
May 17 00:49:31.128529 waagent[1767]: 2025-05-17T00:49:31.128335Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:49:31.128529 waagent[1767]: Executing ['ip', '-a', '-o', 'link']: May 17 00:49:31.128529 waagent[1767]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:49:31.128529 waagent[1767]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:91:86 brd ff:ff:ff:ff:ff:ff May 17 00:49:31.128529 waagent[1767]: 3: enP52112s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:91:86 brd ff:ff:ff:ff:ff:ff\ altname enP52112p0s2 May 17 00:49:31.128529 waagent[1767]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:49:31.128529 waagent[1767]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:49:31.128529 waagent[1767]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:49:31.128529 waagent[1767]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:49:31.128529 waagent[1767]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:49:31.128529 waagent[1767]: 2: eth0 inet6 fe80::20d:3aff:fefc:9186/64 scope link \ valid_lft forever preferred_lft forever May 17 00:49:31.506005 waagent[1767]: 2025-05-17T00:49:31.505878Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules May 17 00:49:31.513853 waagent[1767]: 2025-05-17T00:49:31.513726Z INFO EnvHandler ExtHandler Firewall rules: May 17 00:49:31.513853 waagent[1767]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:31.513853 waagent[1767]: pkts bytes target prot opt in out source destination May 17 00:49:31.513853 waagent[1767]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:31.513853 waagent[1767]: pkts bytes target prot opt in 
out source destination May 17 00:49:31.513853 waagent[1767]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:31.513853 waagent[1767]: pkts bytes target prot opt in out source destination May 17 00:49:31.513853 waagent[1767]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:49:31.513853 waagent[1767]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:49:31.516011 waagent[1767]: 2025-05-17T00:49:31.515955Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:49:31.523604 waagent[1767]: 2025-05-17T00:49:31.523544Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting May 17 00:49:31.772745 systemd[1]: Created slice system-sshd.slice. May 17 00:49:31.773941 systemd[1]: Started sshd@0-10.200.20.24:22-10.200.16.10:49094.service. May 17 00:49:32.215572 waagent[1697]: 2025-05-17T00:49:32.214432Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running May 17 00:49:32.219216 waagent[1697]: 2025-05-17T00:49:32.219157Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent May 17 00:49:32.460894 sshd[1827]: Accepted publickey for core from 10.200.16.10 port 49094 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:32.480341 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:32.484610 systemd[1]: Started session-3.scope. May 17 00:49:32.485453 systemd-logind[1574]: New session 3 of user core. May 17 00:49:32.894020 systemd[1]: Started sshd@1-10.200.20.24:22-10.200.16.10:49104.service. 
May 17 00:49:33.349158 sshd[1835]: Accepted publickey for core from 10.200.16.10 port 49104 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:33.350926 sshd[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:33.354956 systemd-logind[1574]: New session 4 of user core. May 17 00:49:33.355326 systemd[1]: Started session-4.scope. May 17 00:49:33.535683 waagent[1829]: 2025-05-17T00:49:33.535579Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1) May 17 00:49:33.537215 waagent[1829]: 2025-05-17T00:49:33.537147Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 May 17 00:49:33.537349 waagent[1829]: 2025-05-17T00:49:33.537303Z INFO ExtHandler ExtHandler Python: 3.9.16 May 17 00:49:33.537478 waagent[1829]: 2025-05-17T00:49:33.537435Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 17 00:49:33.552169 waagent[1829]: 2025-05-17T00:49:33.552027Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:49:33.552661 waagent[1829]: 2025-05-17T00:49:33.552602Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:49:33.553023 waagent[1829]: 2025-05-17T00:49:33.552880Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:49:33.553421 waagent[1829]: 2025-05-17T00:49:33.553338Z INFO ExtHandler ExtHandler Initializing the goal state... 
May 17 00:49:33.568092 waagent[1829]: 2025-05-17T00:49:33.568001Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 17 00:49:33.580841 waagent[1829]: 2025-05-17T00:49:33.580778Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 17 00:49:33.581971 waagent[1829]: 2025-05-17T00:49:33.581911Z INFO ExtHandler May 17 00:49:33.582120 waagent[1829]: 2025-05-17T00:49:33.582072Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8355b082-b229-49aa-91cb-4d602bc69338 eTag: 9969923238317451572 source: Fabric] May 17 00:49:33.582888 waagent[1829]: 2025-05-17T00:49:33.582830Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 17 00:49:33.584139 waagent[1829]: 2025-05-17T00:49:33.584077Z INFO ExtHandler May 17 00:49:33.584274 waagent[1829]: 2025-05-17T00:49:33.584228Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 17 00:49:33.592474 waagent[1829]: 2025-05-17T00:49:33.592420Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 17 00:49:33.593020 waagent[1829]: 2025-05-17T00:49:33.592967Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:49:33.616257 waagent[1829]: 2025-05-17T00:49:33.615631Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. May 17 00:49:33.676257 sshd[1835]: pam_unix(sshd:session): session closed for user core May 17 00:49:33.679576 systemd[1]: sshd@1-10.200.20.24:22-10.200.16.10:49104.service: Deactivated successfully. May 17 00:49:33.680513 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:49:33.680540 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. May 17 00:49:33.681735 systemd-logind[1574]: Removed session 4. 
May 17 00:49:33.708331 waagent[1829]: 2025-05-17T00:49:33.708180Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F9148DF9CEAEC03AED1EBBA4BDFCAB2AB54E5A47', 'hasPrivateKey': True} May 17 00:49:33.709474 waagent[1829]: 2025-05-17T00:49:33.709410Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A2E6F5CA009D4F8010A3F8124AC5E634345F9B51', 'hasPrivateKey': False} May 17 00:49:33.710578 waagent[1829]: 2025-05-17T00:49:33.710515Z INFO ExtHandler Fetch goal state from WireServer completed May 17 00:49:33.711445 waagent[1829]: 2025-05-17T00:49:33.711384Z INFO ExtHandler ExtHandler Goal state initialization completed. May 17 00:49:33.732193 waagent[1829]: 2025-05-17T00:49:33.732076Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) May 17 00:49:33.740768 waagent[1829]: 2025-05-17T00:49:33.740650Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:49:33.744568 waagent[1829]: 2025-05-17T00:49:33.744426Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] May 17 00:49:33.744789 waagent[1829]: 2025-05-17T00:49:33.744734Z INFO ExtHandler ExtHandler Checking state of the firewall May 17 00:49:33.757929 systemd[1]: Started sshd@2-10.200.20.24:22-10.200.16.10:49112.service. May 17 00:49:33.789106 waagent[1829]: 2025-05-17T00:49:33.788963Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: May 17 00:49:33.789106 waagent[1829]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.789106 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.789106 waagent[1829]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.789106 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.789106 waagent[1829]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.789106 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.789106 waagent[1829]: 55 7859 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:49:33.789106 waagent[1829]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:49:33.790263 waagent[1829]: 2025-05-17T00:49:33.790196Z INFO ExtHandler ExtHandler Setting up persistent firewall rules May 17 00:49:33.793163 waagent[1829]: 2025-05-17T00:49:33.793033Z INFO ExtHandler ExtHandler The firewalld service is not present on the system May 17 00:49:33.793439 waagent[1829]: 2025-05-17T00:49:33.793385Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:49:33.793876 waagent[1829]: 2025-05-17T00:49:33.793813Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:49:33.802709 waagent[1829]: 2025-05-17T00:49:33.802645Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now May 17 00:49:33.803272 waagent[1829]: 2025-05-17T00:49:33.803214Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:49:33.811843 waagent[1829]: 2025-05-17T00:49:33.811773Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1829 May 17 00:49:33.815088 waagent[1829]: 2025-05-17T00:49:33.815024Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:49:33.815978 waagent[1829]: 2025-05-17T00:49:33.815914Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled May 17 00:49:33.816937 waagent[1829]: 2025-05-17T00:49:33.816875Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 17 00:49:33.819827 waagent[1829]: 2025-05-17T00:49:33.819760Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 17 00:49:33.821224 waagent[1829]: 2025-05-17T00:49:33.821151Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:49:33.821866 waagent[1829]: 2025-05-17T00:49:33.821806Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:49:33.822134 waagent[1829]: 2025-05-17T00:49:33.822085Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:49:33.822806 waagent[1829]: 2025-05-17T00:49:33.822740Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
May 17 00:49:33.823336 waagent[1829]: 2025-05-17T00:49:33.823265Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:49:33.823544 waagent[1829]: 2025-05-17T00:49:33.823467Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:49:33.823839 waagent[1829]: 2025-05-17T00:49:33.823778Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:49:33.824517 waagent[1829]: 2025-05-17T00:49:33.824364Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:49:33.824588 waagent[1829]: 2025-05-17T00:49:33.824529Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:49:33.824809 waagent[1829]: 2025-05-17T00:49:33.824740Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:49:33.824809 waagent[1829]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:49:33.824809 waagent[1829]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:49:33.824809 waagent[1829]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:49:33.824809 waagent[1829]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:49:33.824809 waagent[1829]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:49:33.824809 waagent[1829]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:49:33.825726 waagent[1829]: 2025-05-17T00:49:33.825633Z INFO EnvHandler ExtHandler Configure routes May 17 00:49:33.826375 waagent[1829]: 2025-05-17T00:49:33.826307Z INFO EnvHandler ExtHandler Gateway:None May 17 00:49:33.828680 waagent[1829]: 2025-05-17T00:49:33.828478Z INFO EnvHandler ExtHandler Routes:None May 17 00:49:33.829694 waagent[1829]: 2025-05-17T00:49:33.829612Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:49:33.829832 waagent[1829]: 2025-05-17T00:49:33.829775Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
May 17 00:49:33.834040 waagent[1829]: 2025-05-17T00:49:33.833655Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:49:33.844862 waagent[1829]: 2025-05-17T00:49:33.843407Z INFO ExtHandler ExtHandler Downloading agent manifest May 17 00:49:33.855755 waagent[1829]: 2025-05-17T00:49:33.855679Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:49:33.855755 waagent[1829]: Executing ['ip', '-a', '-o', 'link']: May 17 00:49:33.855755 waagent[1829]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:49:33.855755 waagent[1829]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:91:86 brd ff:ff:ff:ff:ff:ff May 17 00:49:33.855755 waagent[1829]: 3: enP52112s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:91:86 brd ff:ff:ff:ff:ff:ff\ altname enP52112p0s2 May 17 00:49:33.855755 waagent[1829]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:49:33.855755 waagent[1829]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:49:33.855755 waagent[1829]: 2: eth0 inet 10.200.20.24/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:49:33.855755 waagent[1829]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:49:33.855755 waagent[1829]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:49:33.855755 waagent[1829]: 2: eth0 inet6 fe80::20d:3aff:fefc:9186/64 scope link \ valid_lft forever preferred_lft forever May 17 00:49:33.856798 waagent[1829]: 2025-05-17T00:49:33.856728Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:49:33.885496 waagent[1829]: 
2025-05-17T00:49:33.885350Z INFO ExtHandler ExtHandler May 17 00:49:33.886811 waagent[1829]: 2025-05-17T00:49:33.886743Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: dc4520f5-3ddc-429c-a9bb-bc58c271070f correlation b22e1181-59e1-4576-b6f8-99c02dd06fe2 created: 2025-05-17T00:47:44.936958Z] May 17 00:49:33.887866 waagent[1829]: 2025-05-17T00:49:33.887807Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 17 00:49:33.889913 waagent[1829]: 2025-05-17T00:49:33.889857Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] May 17 00:49:33.912742 waagent[1829]: 2025-05-17T00:49:33.912677Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 17 00:49:33.922735 waagent[1829]: 2025-05-17T00:49:33.922533Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E718BB45-4BBE-4331-8031-EC8BF4503789;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 17 00:49:33.932021 waagent[1829]: 2025-05-17T00:49:33.931925Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: May 17 00:49:33.932021 waagent[1829]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.932021 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.932021 waagent[1829]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.932021 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.932021 waagent[1829]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.932021 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.932021 waagent[1829]: 84 14328 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:49:33.932021 waagent[1829]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:49:33.998017 waagent[1829]: 2025-05-17T00:49:33.997892Z INFO EnvHandler ExtHandler The firewall was setup successfully: May 17 00:49:33.998017 waagent[1829]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.998017 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.998017 waagent[1829]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.998017 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.998017 waagent[1829]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:49:33.998017 waagent[1829]: pkts bytes target prot opt in out source destination May 17 00:49:33.998017 waagent[1829]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:49:33.998017 waagent[1829]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:49:33.998017 waagent[1829]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:49:34.242980 sshd[1857]: Accepted publickey for core from 10.200.16.10 port 49112 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:34.244323 sshd[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:34.248518 
systemd[1]: Started session-5.scope. May 17 00:49:34.248886 systemd-logind[1574]: New session 5 of user core. May 17 00:49:34.602798 sshd[1857]: pam_unix(sshd:session): session closed for user core May 17 00:49:34.606043 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. May 17 00:49:34.606192 systemd[1]: sshd@2-10.200.20.24:22-10.200.16.10:49112.service: Deactivated successfully. May 17 00:49:34.606928 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:49:34.607336 systemd-logind[1574]: Removed session 5. May 17 00:49:34.680849 systemd[1]: Started sshd@3-10.200.20.24:22-10.200.16.10:49124.service. May 17 00:49:35.163068 sshd[1894]: Accepted publickey for core from 10.200.16.10 port 49124 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:35.164368 sshd[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:35.168310 systemd-logind[1574]: New session 6 of user core. May 17 00:49:35.168749 systemd[1]: Started session-6.scope. May 17 00:49:35.511214 sshd[1894]: pam_unix(sshd:session): session closed for user core May 17 00:49:35.513954 systemd[1]: sshd@3-10.200.20.24:22-10.200.16.10:49124.service: Deactivated successfully. May 17 00:49:35.514894 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:49:35.514921 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. May 17 00:49:35.515954 systemd-logind[1574]: Removed session 6. May 17 00:49:35.584735 systemd[1]: Started sshd@4-10.200.20.24:22-10.200.16.10:49134.service. May 17 00:49:36.040193 sshd[1901]: Accepted publickey for core from 10.200.16.10 port 49134 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:36.041832 sshd[1901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:36.046088 systemd[1]: Started session-7.scope. May 17 00:49:36.046583 systemd-logind[1574]: New session 7 of user core. 
May 17 00:49:36.618034 sudo[1905]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:49:36.618589 sudo[1905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:49:36.662383 dbus-daemon[1557]: avc: received setenforce notice (enforcing=1) May 17 00:49:36.662747 sudo[1905]: pam_unix(sudo:session): session closed for user root May 17 00:49:36.748724 sshd[1901]: pam_unix(sshd:session): session closed for user core May 17 00:49:36.751737 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. May 17 00:49:36.751944 systemd[1]: sshd@4-10.200.20.24:22-10.200.16.10:49134.service: Deactivated successfully. May 17 00:49:36.752734 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:49:36.753212 systemd-logind[1574]: Removed session 7. May 17 00:49:36.821742 systemd[1]: Started sshd@5-10.200.20.24:22-10.200.16.10:49142.service. May 17 00:49:37.278713 sshd[1909]: Accepted publickey for core from 10.200.16.10 port 49142 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:37.280396 sshd[1909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:37.284176 systemd-logind[1574]: New session 8 of user core. May 17 00:49:37.284645 systemd[1]: Started session-8.scope. May 17 00:49:37.534978 sudo[1914]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:49:37.535740 sudo[1914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:49:37.538514 sudo[1914]: pam_unix(sudo:session): session closed for user root May 17 00:49:37.543093 sudo[1913]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:49:37.543308 sudo[1913]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:49:37.552035 systemd[1]: Stopping audit-rules.service... 
May 17 00:49:37.551000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 00:49:37.557348 kernel: kauditd_printk_skb: 15 callbacks suppressed May 17 00:49:37.557426 kernel: audit: type=1305 audit(1747442977.551:165): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 00:49:37.557709 auditctl[1917]: No rules May 17 00:49:37.558253 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:49:37.558688 systemd[1]: Stopped audit-rules.service. May 17 00:49:37.561135 systemd[1]: Starting audit-rules.service... May 17 00:49:37.551000 audit[1917]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8577200 a2=420 a3=0 items=0 ppid=1 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:37.594742 kernel: audit: type=1300 audit(1747442977.551:165): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8577200 a2=420 a3=0 items=0 ppid=1 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:37.551000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 17 00:49:37.601684 kernel: audit: type=1327 audit(1747442977.551:165): proctitle=2F7362696E2F617564697463746C002D44 May 17 00:49:37.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:37.610315 augenrules[1935]: No rules May 17 00:49:37.611397 systemd[1]: Finished audit-rules.service. 
May 17 00:49:37.618222 kernel: audit: type=1131 audit(1747442977.556:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:37.619598 sudo[1913]: pam_unix(sudo:session): session closed for user root May 17 00:49:37.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:37.636303 kernel: audit: type=1130 audit(1747442977.606:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:37.618000 audit[1913]: USER_END pid=1913 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:49:37.655923 kernel: audit: type=1106 audit(1747442977.618:168): pid=1913 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:49:37.655994 kernel: audit: type=1104 audit(1747442977.618:169): pid=1913 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:49:37.618000 audit[1913]: CRED_DISP pid=1913 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 00:49:37.690628 sshd[1909]: pam_unix(sshd:session): session closed for user core May 17 00:49:37.690000 audit[1909]: USER_END pid=1909 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:49:37.690000 audit[1909]: CRED_DISP pid=1909 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:49:37.716070 systemd[1]: sshd@5-10.200.20.24:22-10.200.16.10:49142.service: Deactivated successfully. May 17 00:49:37.716838 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:49:37.718202 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. May 17 00:49:37.719111 systemd-logind[1574]: Removed session 8. May 17 00:49:37.733751 kernel: audit: type=1106 audit(1747442977.690:170): pid=1909 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:49:37.733824 kernel: audit: type=1104 audit(1747442977.690:171): pid=1909 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:49:37.733888 kernel: audit: type=1131 audit(1747442977.714:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.24:22-10.200.16.10:49142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:49:37.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.24:22-10.200.16.10:49142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:37.775586 systemd[1]: Started sshd@6-10.200.20.24:22-10.200.16.10:49156.service. May 17 00:49:37.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.24:22-10.200.16.10:49156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:38.254000 audit[1942]: USER_ACCT pid=1942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:49:38.256581 sshd[1942]: Accepted publickey for core from 10.200.16.10 port 49156 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:38.256000 audit[1942]: CRED_ACQ pid=1942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:49:38.256000 audit[1942]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff523680 a2=3 a3=1 items=0 ppid=1 pid=1942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:38.256000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:49:38.258210 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:38.262383 systemd[1]: Started session-9.scope. 
May 17 00:49:38.262713 systemd-logind[1574]: New session 9 of user core. May 17 00:49:38.265000 audit[1942]: USER_START pid=1942 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:49:38.266000 audit[1945]: CRED_ACQ pid=1945 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:49:38.524000 audit[1946]: USER_ACCT pid=1946 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:49:38.526467 sudo[1946]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:49:38.525000 audit[1946]: CRED_REFR pid=1946 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:49:38.527096 sudo[1946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:49:38.527000 audit[1946]: USER_START pid=1946 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 00:49:38.548373 systemd[1]: Starting docker.service... 
May 17 00:49:38.581009 env[1956]: time="2025-05-17T00:49:38.580961243Z" level=info msg="Starting up" May 17 00:49:38.591779 env[1956]: time="2025-05-17T00:49:38.591744875Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:49:38.591897 env[1956]: time="2025-05-17T00:49:38.591883073Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:49:38.591962 env[1956]: time="2025-05-17T00:49:38.591947312Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:49:38.592009 env[1956]: time="2025-05-17T00:49:38.591997551Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:49:38.593879 env[1956]: time="2025-05-17T00:49:38.593847202Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:49:38.593879 env[1956]: time="2025-05-17T00:49:38.593870362Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:49:38.593987 env[1956]: time="2025-05-17T00:49:38.593887681Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:49:38.593987 env[1956]: time="2025-05-17T00:49:38.593896841Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:49:38.599607 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2737023700-merged.mount: Deactivated successfully. May 17 00:49:38.989369 env[1956]: time="2025-05-17T00:49:38.989327105Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 17 00:49:38.989369 env[1956]: time="2025-05-17T00:49:38.989365184Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 17 00:49:38.989609 env[1956]: time="2025-05-17T00:49:38.989541022Z" level=info msg="Loading containers: start." 
May 17 00:49:39.122000 audit[1983]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.122000 audit[1983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc3423ab0 a2=0 a3=1 items=0 ppid=1956 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.122000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 17 00:49:39.123000 audit[1985]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.123000 audit[1985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd35a5620 a2=0 a3=1 items=0 ppid=1956 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.123000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 17 00:49:39.125000 audit[1987]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.125000 audit[1987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffffa5f9b0 a2=0 a3=1 items=0 ppid=1956 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.125000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 
00:49:39.127000 audit[1989]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.127000 audit[1989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffec746fe0 a2=0 a3=1 items=0 ppid=1956 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.127000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 00:49:39.128000 audit[1991]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.128000 audit[1991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff3e86d70 a2=0 a3=1 items=0 ppid=1956 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.128000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 17 00:49:39.130000 audit[1993]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.130000 audit[1993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd0e936d0 a2=0 a3=1 items=0 ppid=1956 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.130000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 17 00:49:39.147000 audit[1995]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.147000 audit[1995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff9d660b0 a2=0 a3=1 items=0 ppid=1956 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.147000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 17 00:49:39.149000 audit[1997]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1997 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.149000 audit[1997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe82a5a10 a2=0 a3=1 items=0 ppid=1956 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.149000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 17 00:49:39.150000 audit[1999]: NETFILTER_CFG table=filter:17 family=2 entries=2 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.150000 audit[1999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffdd3775e0 a2=0 a3=1 items=0 ppid=1956 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.150000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:49:39.168000 audit[2003]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_unregister_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.168000 audit[2003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffed90fe90 a2=0 a3=1 items=0 ppid=1956 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.168000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 00:49:39.171000 audit[2004]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.171000 audit[2004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd093db20 a2=0 a3=1 items=0 ppid=1956 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.171000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:49:39.216513 kernel: Initializing XFRM netlink socket May 17 00:49:39.238533 env[1956]: time="2025-05-17T00:49:39.238478805Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 17 00:49:39.358000 audit[2012]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.358000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffd6ed71d0 a2=0 a3=1 items=0 ppid=1956 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.358000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 17 00:49:39.395000 audit[2016]: NETFILTER_CFG table=nat:21 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.395000 audit[2016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff759b330 a2=0 a3=1 items=0 ppid=1956 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.395000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 17 00:49:39.398000 audit[2019]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.398000 audit[2019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd4d05ff0 a2=0 a3=1 items=0 ppid=1956 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 
17 00:49:39.398000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 17 00:49:39.400000 audit[2021]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.400000 audit[2021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcaf0b400 a2=0 a3=1 items=0 ppid=1956 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.400000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 17 00:49:39.402000 audit[2023]: NETFILTER_CFG table=nat:24 family=2 entries=2 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.402000 audit[2023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffdbb51b20 a2=0 a3=1 items=0 ppid=1956 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.402000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 17 00:49:39.404000 audit[2025]: NETFILTER_CFG table=nat:25 family=2 entries=2 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.404000 audit[2025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffffb3145c0 a2=0 a3=1 items=0 ppid=1956 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.404000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 17 00:49:39.405000 audit[2027]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.405000 audit[2027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffece48800 a2=0 a3=1 items=0 ppid=1956 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.405000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 17 00:49:39.407000 audit[2029]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.407000 audit[2029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffcf937820 a2=0 a3=1 items=0 ppid=1956 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.407000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 17 00:49:39.409000 audit[2031]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.409000 
audit[2031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffd989fbe0 a2=0 a3=1 items=0 ppid=1956 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.409000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 00:49:39.410000 audit[2033]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.410000 audit[2033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe9c217f0 a2=0 a3=1 items=0 ppid=1956 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.410000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 00:49:39.412000 audit[2035]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.412000 audit[2035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffcfac6c0 a2=0 a3=1 items=0 ppid=1956 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.412000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 17 00:49:39.414471 systemd-networkd[1757]: docker0: Link UP May 17 00:49:39.430000 audit[2039]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_unregister_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.430000 audit[2039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc4edfcc0 a2=0 a3=1 items=0 ppid=1956 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.430000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 00:49:39.437000 audit[2040]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:49:39.437000 audit[2040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffee3a13a0 a2=0 a3=1 items=0 ppid=1956 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:39.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 00:49:39.439688 env[1956]: time="2025-05-17T00:49:39.439654739Z" level=info msg="Loading containers: done." May 17 00:49:39.451155 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck56116028-merged.mount: Deactivated successfully. 
May 17 00:49:39.501714 env[1956]: time="2025-05-17T00:49:39.501671551Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:49:39.501880 env[1956]: time="2025-05-17T00:49:39.501866628Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:49:39.501994 env[1956]: time="2025-05-17T00:49:39.501966387Z" level=info msg="Daemon has completed initialization" May 17 00:49:39.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:39.536330 systemd[1]: Started docker.service. May 17 00:49:39.540650 env[1956]: time="2025-05-17T00:49:39.540599621Z" level=info msg="API listen on /run/docker.sock" May 17 00:49:40.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:40.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:40.487982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:49:40.488148 systemd[1]: Stopped kubelet.service. May 17 00:49:40.490351 systemd[1]: Starting kubelet.service... May 17 00:49:40.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:40.585586 systemd[1]: Started kubelet.service. 
May 17 00:49:40.720710 kubelet[2080]: E0517 00:49:40.720670 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:49:40.723065 env[1585]: time="2025-05-17T00:49:40.722806569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:49:40.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:49:40.724102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:49:40.724235 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:49:41.758501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984282445.mount: Deactivated successfully. 
May 17 00:49:43.241279 env[1585]: time="2025-05-17T00:49:43.241236376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:43.250180 env[1585]: time="2025-05-17T00:49:43.250140795Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:43.254061 env[1585]: time="2025-05-17T00:49:43.254020431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:43.258209 env[1585]: time="2025-05-17T00:49:43.258175744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:43.258948 env[1585]: time="2025-05-17T00:49:43.258920696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 00:49:43.260339 env[1585]: time="2025-05-17T00:49:43.260314520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:49:44.690850 env[1585]: time="2025-05-17T00:49:44.690794066Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:44.699575 env[1585]: time="2025-05-17T00:49:44.699535334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:49:44.707394 env[1585]: time="2025-05-17T00:49:44.707355251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:44.713297 env[1585]: time="2025-05-17T00:49:44.713261348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:44.714251 env[1585]: time="2025-05-17T00:49:44.714224658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 00:49:44.715673 env[1585]: time="2025-05-17T00:49:44.715640443Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:49:45.963440 env[1585]: time="2025-05-17T00:49:45.963393249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:45.970430 env[1585]: time="2025-05-17T00:49:45.970391579Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:45.976739 env[1585]: time="2025-05-17T00:49:45.976705036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:45.981698 env[1585]: time="2025-05-17T00:49:45.981669227Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:45.982650 env[1585]: time="2025-05-17T00:49:45.982620218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 00:49:45.984052 env[1585]: time="2025-05-17T00:49:45.984029323Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:49:47.143842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621862968.mount: Deactivated successfully. May 17 00:49:47.632996 env[1585]: time="2025-05-17T00:49:47.632943432Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:47.639168 env[1585]: time="2025-05-17T00:49:47.639122458Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:47.643570 env[1585]: time="2025-05-17T00:49:47.643531100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:47.647711 env[1585]: time="2025-05-17T00:49:47.647676304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:47.648210 env[1585]: time="2025-05-17T00:49:47.648178659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference 
\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 00:49:47.648729 env[1585]: time="2025-05-17T00:49:47.648700855Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:49:48.331811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165538882.mount: Deactivated successfully. May 17 00:49:49.853809 env[1585]: time="2025-05-17T00:49:49.853757513Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:49.861686 env[1585]: time="2025-05-17T00:49:49.861617053Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:49.865872 env[1585]: time="2025-05-17T00:49:49.865834820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:49.871111 env[1585]: time="2025-05-17T00:49:49.871080900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:49.871934 env[1585]: time="2025-05-17T00:49:49.871903134Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:49:49.873116 env[1585]: time="2025-05-17T00:49:49.873086685Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:49:50.489523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255088906.mount: Deactivated successfully. 
May 17 00:49:50.522945 env[1585]: time="2025-05-17T00:49:50.522894985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:50.535858 env[1585]: time="2025-05-17T00:49:50.535816451Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:50.543105 env[1585]: time="2025-05-17T00:49:50.543069119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:50.548790 env[1585]: time="2025-05-17T00:49:50.548759518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:50.549297 env[1585]: time="2025-05-17T00:49:50.549271875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:49:50.549838 env[1585]: time="2025-05-17T00:49:50.549806471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:49:50.737943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:49:50.738112 systemd[1]: Stopped kubelet.service. May 17 00:49:50.758543 kernel: kauditd_printk_skb: 88 callbacks suppressed May 17 00:49:50.758613 kernel: audit: type=1130 audit(1747442990.736:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:49:50.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:50.739749 systemd[1]: Starting kubelet.service... May 17 00:49:50.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:50.779004 kernel: audit: type=1131 audit(1747442990.736:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:50.843608 systemd[1]: Started kubelet.service. May 17 00:49:50.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:50.861524 kernel: audit: type=1130 audit(1747442990.842:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:50.962587 kubelet[2098]: E0517 00:49:50.962241 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:49:51.286633 kernel: audit: type=1131 audit(1747442990.963:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' May 17 00:49:50.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:49:50.964213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:49:50.964358 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:49:51.684347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199822898.mount: Deactivated successfully. May 17 00:49:56.477639 env[1585]: time="2025-05-17T00:49:56.477582065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:56.487718 env[1585]: time="2025-05-17T00:49:56.487679096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:56.492716 env[1585]: time="2025-05-17T00:49:56.492680991Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:56.497902 env[1585]: time="2025-05-17T00:49:56.497871566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:56.498737 env[1585]: time="2025-05-17T00:49:56.498707122Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 00:49:57.828510 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB May 17 00:50:00.987993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:50:00.988164 systemd[1]: Stopped kubelet.service. May 17 00:50:00.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:00.989744 systemd[1]: Starting kubelet.service... May 17 00:50:00.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:01.042027 kernel: audit: type=1130 audit(1747443000.987:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:01.042137 kernel: audit: type=1131 audit(1747443000.987:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:01.089178 systemd[1]: Started kubelet.service. May 17 00:50:01.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:01.123533 kernel: audit: type=1130 audit(1747443001.088:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:50:01.210205 kubelet[2130]: E0517 00:50:01.210162 2130 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:50:01.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:50:01.212891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:50:01.213036 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:50:01.232518 kernel: audit: type=1131 audit(1747443001.212:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:50:01.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:01.902875 systemd[1]: Stopped kubelet.service. May 17 00:50:01.905150 systemd[1]: Starting kubelet.service... May 17 00:50:01.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:01.948636 kernel: audit: type=1130 audit(1747443001.902:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:50:01.948850 kernel: audit: type=1131 audit(1747443001.902:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:01.966946 systemd[1]: Reloading. May 17 00:50:02.051630 /usr/lib/systemd/system-generators/torcx-generator[2168]: time="2025-05-17T00:50:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:50:02.051660 /usr/lib/systemd/system-generators/torcx-generator[2168]: time="2025-05-17T00:50:02Z" level=info msg="torcx already run" May 17 00:50:02.142414 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:50:02.142757 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:50:02.160446 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:50:02.253994 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:50:02.254063 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:50:02.254451 systemd[1]: Stopped kubelet.service. May 17 00:50:02.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:50:02.262739 systemd[1]: Starting kubelet.service... 
May 17 00:50:02.274535 kernel: audit: type=1130 audit(1747443002.254:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 00:50:02.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:02.446841 systemd[1]: Started kubelet.service. May 17 00:50:02.468573 kernel: audit: type=1130 audit(1747443002.448:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:02.492041 kubelet[2244]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:50:02.492441 kubelet[2244]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:50:02.492519 kubelet[2244]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:50:02.492697 kubelet[2244]: I0517 00:50:02.492669 2244 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:50:03.427280 kubelet[2244]: I0517 00:50:03.427238 2244 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:50:03.427280 kubelet[2244]: I0517 00:50:03.427272 2244 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:50:03.427892 kubelet[2244]: I0517 00:50:03.427869 2244 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:50:03.448545 kubelet[2244]: E0517 00:50:03.448512 2244 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:03.450900 kubelet[2244]: I0517 00:50:03.450879 2244 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:50:03.456465 kubelet[2244]: E0517 00:50:03.456421 2244 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:50:03.456465 kubelet[2244]: I0517 00:50:03.456458 2244 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:50:03.460318 kubelet[2244]: I0517 00:50:03.460296 2244 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:50:03.461249 kubelet[2244]: I0517 00:50:03.461227 2244 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:50:03.461391 kubelet[2244]: I0517 00:50:03.461360 2244 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:50:03.461591 kubelet[2244]: I0517 00:50:03.461393 2244 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-e6f3637a46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:50:03.461686 kubelet[2244]: I0517 00:50:03.461604 2244 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:50:03.461686 kubelet[2244]: I0517 00:50:03.461613 2244 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:50:03.461736 kubelet[2244]: I0517 00:50:03.461723 2244 state_mem.go:36] "Initialized new in-memory state store" May 17 00:50:03.464801 kubelet[2244]: I0517 00:50:03.464761 2244 kubelet.go:408] "Attempting to sync node with API server" May 17 00:50:03.464879 kubelet[2244]: I0517 00:50:03.464808 2244 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:50:03.464879 kubelet[2244]: I0517 00:50:03.464833 2244 kubelet.go:314] "Adding apiserver pod source" May 17 00:50:03.464879 kubelet[2244]: I0517 00:50:03.464848 2244 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:50:03.467685 kubelet[2244]: W0517 00:50:03.467634 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-e6f3637a46&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 17 00:50:03.467819 kubelet[2244]: E0517 00:50:03.467802 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-e6f3637a46&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:03.470386 kubelet[2244]: W0517 00:50:03.470138 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: 
connection refused May 17 00:50:03.470386 kubelet[2244]: E0517 00:50:03.470183 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:03.470513 kubelet[2244]: I0517 00:50:03.470439 2244 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:50:03.470976 kubelet[2244]: I0517 00:50:03.470946 2244 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:50:03.471033 kubelet[2244]: W0517 00:50:03.471006 2244 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:50:03.471838 kubelet[2244]: I0517 00:50:03.471808 2244 server.go:1274] "Started kubelet" May 17 00:50:03.477000 audit[2244]: AVC avc: denied { mac_admin } for pid=2244 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:03.479357 kubelet[2244]: I0517 00:50:03.479304 2244 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:50:03.481258 kubelet[2244]: I0517 00:50:03.481240 2244 server.go:449] "Adding debug handlers to kubelet server" May 17 00:50:03.484058 kubelet[2244]: I0517 00:50:03.484000 2244 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:50:03.484393 kubelet[2244]: I0517 00:50:03.484378 2244 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:50:03.488194 kubelet[2244]: I0517 00:50:03.488172 2244 kubelet.go:1430] "Unprivileged containerized plugins 
might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:50:03.488338 kubelet[2244]: I0517 00:50:03.488323 2244 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:50:03.488512 kubelet[2244]: I0517 00:50:03.488501 2244 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:50:03.477000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:03.505844 kernel: audit: type=1400 audit(1747443003.477:223): avc: denied { mac_admin } for pid=2244 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:03.505936 kernel: audit: type=1401 audit(1747443003.477:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:03.477000 audit[2244]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400097aab0 a1=400089fa28 a2=400097aa80 a3=25 items=0 ppid=1 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:50:03.487000 audit[2244]: AVC avc: denied { mac_admin } for pid=2244 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:03.487000 audit: SELINUX_ERR 
op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:03.487000 audit[2244]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009030a0 a1=400089fa40 a2=400097ab40 a3=25 items=0 ppid=1 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.487000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:50:03.509997 kubelet[2244]: E0517 00:50:03.484627 2244 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.24:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-e6f3637a46.18402a273f2d7753 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-e6f3637a46,UID:ci-3510.3.7-n-e6f3637a46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-e6f3637a46,},FirstTimestamp:2025-05-17 00:50:03.471787859 +0000 UTC m=+1.017441322,LastTimestamp:2025-05-17 00:50:03.471787859 +0000 UTC m=+1.017441322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-e6f3637a46,}" May 17 00:50:03.511010 kubelet[2244]: I0517 00:50:03.510766 2244 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:50:03.512149 kubelet[2244]: I0517 00:50:03.512120 2244 volume_manager.go:289] "Starting 
Kubelet Volume Manager" May 17 00:50:03.512368 kubelet[2244]: E0517 00:50:03.512334 2244 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-e6f3637a46\" not found" May 17 00:50:03.511000 audit[2255]: NETFILTER_CFG table=mangle:33 family=2 entries=2 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:03.511000 audit[2255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff4accaf0 a2=0 a3=1 items=0 ppid=2244 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.511000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:50:03.512000 audit[2256]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=2256 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:03.512000 audit[2256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9f5b460 a2=0 a3=1 items=0 ppid=2244 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.512000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:50:03.513665 kubelet[2244]: I0517 00:50:03.513639 2244 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:50:03.513736 kubelet[2244]: I0517 00:50:03.513706 2244 reconciler.go:26] "Reconciler: start to sync state" May 17 00:50:03.515529 kubelet[2244]: E0517 00:50:03.515456 2244 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:50:03.515631 kubelet[2244]: W0517 00:50:03.515587 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 17 00:50:03.515667 kubelet[2244]: E0517 00:50:03.515640 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:03.515725 kubelet[2244]: E0517 00:50:03.515699 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-e6f3637a46?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="200ms" May 17 00:50:03.515860 kubelet[2244]: I0517 00:50:03.515837 2244 factory.go:221] Registration of the systemd container factory successfully May 17 00:50:03.515934 kubelet[2244]: I0517 00:50:03.515913 2244 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:50:03.516000 audit[2258]: NETFILTER_CFG table=filter:35 family=2 entries=2 op=nft_register_chain pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:03.516000 audit[2258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd98745b0 a2=0 a3=1 items=0 ppid=2244 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:50:03.518067 kubelet[2244]: I0517 00:50:03.518036 2244 factory.go:221] Registration of the containerd container factory successfully May 17 00:50:03.518000 audit[2260]: NETFILTER_CFG table=filter:36 family=2 entries=2 op=nft_register_chain pid=2260 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:03.518000 audit[2260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc81299f0 a2=0 a3=1 items=0 ppid=2244 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:50:03.599000 audit[2268]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:03.599000 audit[2268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd4aee4c0 a2=0 a3=1 items=0 ppid=2244 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 17 00:50:03.600948 kubelet[2244]: I0517 00:50:03.600547 2244 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 17 00:50:03.600000 audit[2269]: NETFILTER_CFG table=mangle:38 family=10 entries=2 op=nft_register_chain pid=2269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:03.600000 audit[2269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe387c560 a2=0 a3=1 items=0 ppid=2244 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 00:50:03.602008 kubelet[2244]: I0517 00:50:03.601988 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:50:03.602095 kubelet[2244]: I0517 00:50:03.602086 2244 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:50:03.602169 kubelet[2244]: I0517 00:50:03.602160 2244 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:50:03.602274 kubelet[2244]: E0517 00:50:03.602259 2244 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:50:03.603115 kubelet[2244]: W0517 00:50:03.603078 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 17 00:50:03.603180 kubelet[2244]: E0517 00:50:03.603122 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" 
logger="UnhandledError" May 17 00:50:03.603000 audit[2271]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:03.603000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4ba5310 a2=0 a3=1 items=0 ppid=2244 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:50:03.603000 audit[2270]: NETFILTER_CFG table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:03.603000 audit[2270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd87bb630 a2=0 a3=1 items=0 ppid=2244 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 00:50:03.604000 audit[2273]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:03.604000 audit[2273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1efe7c0 a2=0 a3=1 items=0 ppid=2244 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.604000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:50:03.605000 audit[2274]: NETFILTER_CFG table=nat:42 family=10 entries=2 op=nft_register_chain pid=2274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:03.605000 audit[2274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffd56d4e60 a2=0 a3=1 items=0 ppid=2244 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.605000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 00:50:03.606000 audit[2275]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2275 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:03.606000 audit[2275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb04f420 a2=0 a3=1 items=0 ppid=2244 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:50:03.607000 audit[2276]: NETFILTER_CFG table=filter:44 family=10 entries=2 op=nft_register_chain pid=2276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:03.607000 audit[2276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff22aab00 a2=0 a3=1 items=0 ppid=2244 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.607000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 00:50:03.612989 kubelet[2244]: E0517 00:50:03.612962 2244 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-e6f3637a46\" not found" May 17 00:50:03.703477 kubelet[2244]: E0517 00:50:03.703388 2244 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:50:03.713613 kubelet[2244]: E0517 00:50:03.713571 2244 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-e6f3637a46\" not found" May 17 00:50:03.716084 kubelet[2244]: E0517 00:50:03.716042 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-e6f3637a46?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="400ms" May 17 00:50:03.738816 kubelet[2244]: I0517 00:50:03.738750 2244 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:50:03.738993 kubelet[2244]: I0517 00:50:03.738982 2244 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:50:03.739088 kubelet[2244]: I0517 00:50:03.739077 2244 state_mem.go:36] "Initialized new in-memory state store" May 17 00:50:03.745362 kubelet[2244]: I0517 00:50:03.745339 2244 policy_none.go:49] "None policy: Start" May 17 00:50:03.746206 kubelet[2244]: I0517 00:50:03.746190 2244 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:50:03.746344 kubelet[2244]: I0517 00:50:03.746334 2244 state_mem.go:35] "Initializing new in-memory state store" May 17 00:50:03.754860 kubelet[2244]: I0517 00:50:03.754821 2244 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:50:03.754000 audit[2244]: AVC avc: denied { 
mac_admin } for pid=2244 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:03.754000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:03.754000 audit[2244]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ec6ab0 a1=4000eb8dc8 a2=4000ec6a80 a3=25 items=0 ppid=1 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:03.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:50:03.755215 kubelet[2244]: I0517 00:50:03.755122 2244 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:50:03.755248 kubelet[2244]: I0517 00:50:03.755233 2244 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:50:03.755271 kubelet[2244]: I0517 00:50:03.755245 2244 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:50:03.756593 kubelet[2244]: I0517 00:50:03.756572 2244 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:50:03.760946 kubelet[2244]: E0517 00:50:03.760921 2244 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-e6f3637a46\" not found" May 17 00:50:03.856892 kubelet[2244]: I0517 00:50:03.856865 2244 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.857441 kubelet[2244]: E0517 00:50:03.857399 2244 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916283 kubelet[2244]: I0517 00:50:03.916232 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38334319291e09c61745efeca1a4902e-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-e6f3637a46\" (UID: \"38334319291e09c61745efeca1a4902e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916283 kubelet[2244]: I0517 00:50:03.916284 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916436 kubelet[2244]: I0517 00:50:03.916305 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916436 kubelet[2244]: I0517 00:50:03.916323 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916436 kubelet[2244]: I0517 00:50:03.916358 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6f468545100581dbd6329df57d02068-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-e6f3637a46\" (UID: \"d6f468545100581dbd6329df57d02068\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916436 kubelet[2244]: I0517 00:50:03.916376 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38334319291e09c61745efeca1a4902e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-e6f3637a46\" (UID: \"38334319291e09c61745efeca1a4902e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916436 kubelet[2244]: I0517 00:50:03.916392 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/38334319291e09c61745efeca1a4902e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-e6f3637a46\" (UID: \"38334319291e09c61745efeca1a4902e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916595 kubelet[2244]: I0517 00:50:03.916420 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:03.916595 kubelet[2244]: I0517 00:50:03.916437 2244 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:04.059935 kubelet[2244]: I0517 00:50:04.059896 2244 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:04.060424 kubelet[2244]: E0517 00:50:04.060398 2244 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:04.116757 kubelet[2244]: E0517 00:50:04.116702 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-e6f3637a46?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="800ms" May 17 00:50:04.210941 env[1585]: time="2025-05-17T00:50:04.210879440Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-e6f3637a46,Uid:21e160a5c31c8696f8b13d0b05d45b7e,Namespace:kube-system,Attempt:0,}" May 17 00:50:04.211565 env[1585]: time="2025-05-17T00:50:04.211439359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-e6f3637a46,Uid:38334319291e09c61745efeca1a4902e,Namespace:kube-system,Attempt:0,}" May 17 00:50:04.215772 env[1585]: time="2025-05-17T00:50:04.215730946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-e6f3637a46,Uid:d6f468545100581dbd6329df57d02068,Namespace:kube-system,Attempt:0,}" May 17 00:50:04.462282 kubelet[2244]: I0517 00:50:04.462164 2244 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:04.462744 kubelet[2244]: E0517 00:50:04.462710 2244 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:04.602231 update_engine[1578]: I0517 00:50:04.602023 1578 update_attempter.cc:509] Updating boot flags... 
May 17 00:50:04.690046 kubelet[2244]: W0517 00:50:04.689942 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 17 00:50:04.690046 kubelet[2244]: E0517 00:50:04.690010 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:04.780295 kubelet[2244]: W0517 00:50:04.780191 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-e6f3637a46&limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 17 00:50:04.780295 kubelet[2244]: E0517 00:50:04.780260 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-e6f3637a46&limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:04.871609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount878307502.mount: Deactivated successfully. 
May 17 00:50:04.910018 env[1585]: time="2025-05-17T00:50:04.909963201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.916762 env[1585]: time="2025-05-17T00:50:04.916719502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.917391 kubelet[2244]: E0517 00:50:04.917338 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-e6f3637a46?timeout=10s\": dial tcp 10.200.20.24:6443: connect: connection refused" interval="1.6s" May 17 00:50:04.934261 env[1585]: time="2025-05-17T00:50:04.934217651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.935126 kubelet[2244]: W0517 00:50:04.935031 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 17 00:50:04.935126 kubelet[2244]: E0517 00:50:04.935096 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:04.937966 env[1585]: time="2025-05-17T00:50:04.937924320Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:50:04.944662 env[1585]: time="2025-05-17T00:50:04.944632300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.947107 env[1585]: time="2025-05-17T00:50:04.947065373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.951123 env[1585]: time="2025-05-17T00:50:04.951090881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.953717 env[1585]: time="2025-05-17T00:50:04.953679714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.962347 env[1585]: time="2025-05-17T00:50:04.962302889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.966056 env[1585]: time="2025-05-17T00:50:04.966023398Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.990540 env[1585]: time="2025-05-17T00:50:04.990465046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:05.020130 env[1585]: time="2025-05-17T00:50:05.020082364Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:05.025311 env[1585]: time="2025-05-17T00:50:05.025234029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:05.025460 env[1585]: time="2025-05-17T00:50:05.025278149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:05.026876 env[1585]: time="2025-05-17T00:50:05.025451309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:05.026876 env[1585]: time="2025-05-17T00:50:05.025747508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0bbf85f91a34d31edec9bba49ad2ee3b23d8b873651aca36504b52592c083ea pid=2323 runtime=io.containerd.runc.v2 May 17 00:50:05.064956 env[1585]: time="2025-05-17T00:50:05.064132123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:05.067336 env[1585]: time="2025-05-17T00:50:05.067223035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:05.069442 env[1585]: time="2025-05-17T00:50:05.067303154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:05.069442 env[1585]: time="2025-05-17T00:50:05.067762033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5334be21c9e9ff61b745965af7c17867415dbb51f751684db0c30c529a8e54b pid=2360 runtime=io.containerd.runc.v2 May 17 00:50:05.079861 env[1585]: time="2025-05-17T00:50:05.079813040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-e6f3637a46,Uid:38334319291e09c61745efeca1a4902e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0bbf85f91a34d31edec9bba49ad2ee3b23d8b873651aca36504b52592c083ea\"" May 17 00:50:05.082603 env[1585]: time="2025-05-17T00:50:05.082564993Z" level=info msg="CreateContainer within sandbox \"c0bbf85f91a34d31edec9bba49ad2ee3b23d8b873651aca36504b52592c083ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:50:05.114588 env[1585]: time="2025-05-17T00:50:05.114506825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:05.114897 env[1585]: time="2025-05-17T00:50:05.114869184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:05.114992 env[1585]: time="2025-05-17T00:50:05.114972984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:05.115218 env[1585]: time="2025-05-17T00:50:05.115190463Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14a75ca83336b143319184babdd4e24817c747d4a107e1649b4ca2a22f14e8d9 pid=2402 runtime=io.containerd.runc.v2 May 17 00:50:05.120801 env[1585]: time="2025-05-17T00:50:05.120751328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-e6f3637a46,Uid:21e160a5c31c8696f8b13d0b05d45b7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5334be21c9e9ff61b745965af7c17867415dbb51f751684db0c30c529a8e54b\"" May 17 00:50:05.124887 env[1585]: time="2025-05-17T00:50:05.124852037Z" level=info msg="CreateContainer within sandbox \"f5334be21c9e9ff61b745965af7c17867415dbb51f751684db0c30c529a8e54b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:50:05.148128 env[1585]: time="2025-05-17T00:50:05.148081413Z" level=info msg="CreateContainer within sandbox \"c0bbf85f91a34d31edec9bba49ad2ee3b23d8b873651aca36504b52592c083ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3d381c87f56d616da7b1ab9edf746668bb3a7d8d19783c711875dfb7b478b593\"" May 17 00:50:05.151416 env[1585]: time="2025-05-17T00:50:05.151365245Z" level=info msg="StartContainer for \"3d381c87f56d616da7b1ab9edf746668bb3a7d8d19783c711875dfb7b478b593\"" May 17 00:50:05.154234 kubelet[2244]: W0517 00:50:05.154132 2244 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.24:6443: connect: connection refused May 17 00:50:05.154234 kubelet[2244]: E0517 00:50:05.154206 2244 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.24:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:05.164784 env[1585]: time="2025-05-17T00:50:05.164733128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-e6f3637a46,Uid:d6f468545100581dbd6329df57d02068,Namespace:kube-system,Attempt:0,} returns sandbox id \"14a75ca83336b143319184babdd4e24817c747d4a107e1649b4ca2a22f14e8d9\"" May 17 00:50:05.167137 env[1585]: time="2025-05-17T00:50:05.167092122Z" level=info msg="CreateContainer within sandbox \"14a75ca83336b143319184babdd4e24817c747d4a107e1649b4ca2a22f14e8d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:50:05.195200 env[1585]: time="2025-05-17T00:50:05.195137125Z" level=info msg="CreateContainer within sandbox \"f5334be21c9e9ff61b745965af7c17867415dbb51f751684db0c30c529a8e54b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c269f727c946a0cd05a88dd1c4429435d199b3612c1b26a5f302c319a7fc41f3\"" May 17 00:50:05.195948 env[1585]: time="2025-05-17T00:50:05.195924203Z" level=info msg="StartContainer for \"c269f727c946a0cd05a88dd1c4429435d199b3612c1b26a5f302c319a7fc41f3\"" May 17 00:50:05.229411 env[1585]: time="2025-05-17T00:50:05.229270591Z" level=info msg="StartContainer for \"3d381c87f56d616da7b1ab9edf746668bb3a7d8d19783c711875dfb7b478b593\" returns successfully" May 17 00:50:05.234614 env[1585]: time="2025-05-17T00:50:05.234563697Z" level=info msg="CreateContainer within sandbox \"14a75ca83336b143319184babdd4e24817c747d4a107e1649b4ca2a22f14e8d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c67be6a96ae50f16ea6f85320ae84e6a30c933a3c48247d62a60fbfe25bf1fd\"" May 17 00:50:05.235228 env[1585]: time="2025-05-17T00:50:05.235204015Z" level=info msg="StartContainer for \"7c67be6a96ae50f16ea6f85320ae84e6a30c933a3c48247d62a60fbfe25bf1fd\"" May 17 
00:50:05.265247 kubelet[2244]: I0517 00:50:05.264876 2244 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:05.265247 kubelet[2244]: E0517 00:50:05.265187 2244 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.24:6443/api/v1/nodes\": dial tcp 10.200.20.24:6443: connect: connection refused" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:05.304714 env[1585]: time="2025-05-17T00:50:05.304671345Z" level=info msg="StartContainer for \"c269f727c946a0cd05a88dd1c4429435d199b3612c1b26a5f302c319a7fc41f3\" returns successfully" May 17 00:50:05.337562 env[1585]: time="2025-05-17T00:50:05.337427656Z" level=info msg="StartContainer for \"7c67be6a96ae50f16ea6f85320ae84e6a30c933a3c48247d62a60fbfe25bf1fd\" returns successfully" May 17 00:50:06.867655 kubelet[2244]: I0517 00:50:06.867628 2244 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:07.465258 kubelet[2244]: I0517 00:50:07.465214 2244 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:07.469556 kubelet[2244]: I0517 00:50:07.469527 2244 apiserver.go:52] "Watching apiserver" May 17 00:50:07.514446 kubelet[2244]: I0517 00:50:07.514367 2244 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:50:09.852516 systemd[1]: Reloading. 
May 17 00:50:09.930774 /usr/lib/systemd/system-generators/torcx-generator[2580]: time="2025-05-17T00:50:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:50:09.931146 /usr/lib/systemd/system-generators/torcx-generator[2580]: time="2025-05-17T00:50:09Z" level=info msg="torcx already run" May 17 00:50:10.018838 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:50:10.018862 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:50:10.036410 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:50:10.128789 systemd[1]: Stopping kubelet.service... May 17 00:50:10.152927 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:50:10.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:10.153414 systemd[1]: Stopped kubelet.service. May 17 00:50:10.159593 kernel: kauditd_printk_skb: 46 callbacks suppressed May 17 00:50:10.159684 kernel: audit: type=1131 audit(1747443010.152:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:10.159381 systemd[1]: Starting kubelet.service... 
May 17 00:50:10.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:10.262795 systemd[1]: Started kubelet.service. May 17 00:50:10.281683 kernel: audit: type=1130 audit(1747443010.262:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:10.313680 kubelet[2655]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:50:10.314031 kubelet[2655]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:50:10.314077 kubelet[2655]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:50:10.314212 kubelet[2655]: I0517 00:50:10.314182 2655 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:50:10.319984 kubelet[2655]: I0517 00:50:10.319943 2655 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:50:10.319984 kubelet[2655]: I0517 00:50:10.319975 2655 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:50:10.320194 kubelet[2655]: I0517 00:50:10.320172 2655 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:50:10.321601 kubelet[2655]: I0517 00:50:10.321574 2655 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:50:10.345575 kubelet[2655]: I0517 00:50:10.345542 2655 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:50:10.355306 kubelet[2655]: E0517 00:50:10.355259 2655 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:50:10.644102 kernel: audit: type=1400 audit(1747443010.364:240): avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:10.644192 kernel: audit: type=1401 audit(1747443010.364:240): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:10.644213 kernel: audit: type=1300 audit(1747443010.364:240): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000856ed0 a1=400066ccd8 a2=4000856ea0 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:10.644230 kernel: audit: type=1327 
audit(1747443010.364:240): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:50:10.644249 kernel: audit: type=1400 audit(1747443010.364:241): avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:10.644266 kernel: audit: type=1401 audit(1747443010.364:241): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:10.644284 kernel: audit: type=1300 audit(1747443010.364:241): arch=c00000b7 syscall=5 success=no exit=-22 a0=400052eda0 a1=400066ccf0 a2=4000856f60 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:10.644301 kernel: audit: type=1327 audit(1747443010.364:241): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:50:10.364000 audit[2655]: AVC avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:10.364000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:10.364000 audit[2655]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000856ed0 a1=400066ccd8 a2=4000856ea0 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:10.364000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:50:10.364000 audit[2655]: AVC avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:10.364000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:10.364000 audit[2655]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400052eda0 a1=400066ccf0 a2=4000856f60 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:10.364000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:50:10.645282 kubelet[2655]: I0517 00:50:10.355411 2655 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:50:10.645282 kubelet[2655]: I0517 00:50:10.358408 2655 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:50:10.645282 kubelet[2655]: I0517 00:50:10.359859 2655 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:50:10.645282 kubelet[2655]: I0517 00:50:10.359975 2655 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:50:10.645427 kubelet[2655]: I0517 00:50:10.360003 2655 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-e6f3637a46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:50:10.645427 kubelet[2655]: I0517 00:50:10.360229 2655 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:50:10.645427 kubelet[2655]: I0517 00:50:10.360237 2655 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:50:10.645427 kubelet[2655]: I0517 00:50:10.360289 2655 state_mem.go:36] "Initialized new in-memory state store" May 17 00:50:10.645427 kubelet[2655]: I0517 00:50:10.360421 2655 kubelet.go:408] "Attempting to sync node with API server" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.360436 2655 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.360456 2655 kubelet.go:314] "Adding apiserver pod source" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.360469 2655 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.361835 2655 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.362423 2655 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.362930 2655 server.go:1274] "Started kubelet" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.365096 2655 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.365142 2655 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 00:50:10.645601 
kubelet[2655]: I0517 00:50:10.365166 2655 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.370308 2655 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.371164 2655 server.go:449] "Adding debug handlers to kubelet server" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.372116 2655 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.390817 2655 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:50:10.645601 kubelet[2655]: I0517 00:50:10.392105 2655 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:50:10.645601 kubelet[2655]: E0517 00:50:10.393220 2655 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-e6f3637a46\" not found" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.402549 2655 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.418853 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.432198 2655 factory.go:221] Registration of the systemd container factory successfully May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.432316 2655 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:50:10.645891 kubelet[2655]: E0517 00:50:10.452024 2655 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.453834 2655 factory.go:221] Registration of the containerd container factory successfully May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.479825 2655 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.479857 2655 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.479891 2655 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:50:10.645891 kubelet[2655]: E0517 00:50:10.479968 2655 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:50:10.645891 kubelet[2655]: E0517 00:50:10.580957 2655 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.620082 2655 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.620100 2655 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:50:10.645891 kubelet[2655]: I0517 00:50:10.620122 2655 state_mem.go:36] "Initialized new in-memory state store" May 17 00:50:10.646653 kubelet[2655]: I0517 00:50:10.646627 2655 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:50:10.646748 kubelet[2655]: I0517 00:50:10.646723 2655 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:50:10.646803 kubelet[2655]: I0517 00:50:10.646795 2655 policy_none.go:49] "None policy: Start" May 17 00:50:10.647696 kubelet[2655]: I0517 00:50:10.647680 2655 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:50:10.647789 kubelet[2655]: I0517 00:50:10.647779 2655 state_mem.go:35] "Initializing new in-memory state store" 
May 17 00:50:10.647998 kubelet[2655]: I0517 00:50:10.647982 2655 state_mem.go:75] "Updated machine memory state" May 17 00:50:10.649162 kubelet[2655]: I0517 00:50:10.649135 2655 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:50:10.649277 kubelet[2655]: I0517 00:50:10.649260 2655 reconciler.go:26] "Reconciler: start to sync state" May 17 00:50:10.649851 kubelet[2655]: I0517 00:50:10.649834 2655 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:50:10.649000 audit[2655]: AVC avc: denied { mac_admin } for pid=2655 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:10.649000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 00:50:10.649000 audit[2655]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e13950 a1=4000c674b8 a2=4000e13920 a3=25 items=0 ppid=1 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:10.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 00:50:10.651186 kubelet[2655]: I0517 00:50:10.651118 2655 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 00:50:10.651295 kubelet[2655]: I0517 00:50:10.651272 2655 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:50:10.651342 kubelet[2655]: I0517 00:50:10.651290 2655 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:50:10.651611 kubelet[2655]: I0517 00:50:10.651589 2655 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:50:10.761605 kubelet[2655]: I0517 00:50:10.761555 2655 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.774353 kubelet[2655]: I0517 00:50:10.774325 2655 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.774597 kubelet[2655]: I0517 00:50:10.774579 2655 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.795689 kubelet[2655]: W0517 00:50:10.795660 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:50:10.798731 kubelet[2655]: W0517 00:50:10.798708 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:50:10.799727 kubelet[2655]: W0517 00:50:10.799706 2655 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:50:10.850383 kubelet[2655]: I0517 00:50:10.850340 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.850595 kubelet[2655]: I0517 00:50:10.850579 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38334319291e09c61745efeca1a4902e-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-e6f3637a46\" (UID: \"38334319291e09c61745efeca1a4902e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.850690 kubelet[2655]: I0517 00:50:10.850677 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38334319291e09c61745efeca1a4902e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-e6f3637a46\" (UID: \"38334319291e09c61745efeca1a4902e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.850783 kubelet[2655]: I0517 00:50:10.850769 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.850862 kubelet[2655]: I0517 00:50:10.850850 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.850942 kubelet[2655]: I0517 00:50:10.850930 2655 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.851017 kubelet[2655]: I0517 00:50:10.851006 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38334319291e09c61745efeca1a4902e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-e6f3637a46\" (UID: \"38334319291e09c61745efeca1a4902e\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.851110 kubelet[2655]: I0517 00:50:10.851098 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/21e160a5c31c8696f8b13d0b05d45b7e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-e6f3637a46\" (UID: \"21e160a5c31c8696f8b13d0b05d45b7e\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" May 17 00:50:10.851198 kubelet[2655]: I0517 00:50:10.851186 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6f468545100581dbd6329df57d02068-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-e6f3637a46\" (UID: \"d6f468545100581dbd6329df57d02068\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-e6f3637a46" May 17 00:50:11.361135 kubelet[2655]: I0517 00:50:11.361097 2655 apiserver.go:52] "Watching apiserver" May 17 00:50:11.403430 kubelet[2655]: I0517 00:50:11.403388 2655 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:50:11.499988 kubelet[2655]: I0517 00:50:11.499920 2655 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-e6f3637a46" podStartSLOduration=1.499905281 podStartE2EDuration="1.499905281s" podCreationTimestamp="2025-05-17 00:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:11.499789602 +0000 UTC m=+1.230978345" watchObservedRunningTime="2025-05-17 00:50:11.499905281 +0000 UTC m=+1.231094024" May 17 00:50:11.526807 kubelet[2655]: I0517 00:50:11.526737 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-e6f3637a46" podStartSLOduration=1.526721792 podStartE2EDuration="1.526721792s" podCreationTimestamp="2025-05-17 00:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:11.513867256 +0000 UTC m=+1.245055999" watchObservedRunningTime="2025-05-17 00:50:11.526721792 +0000 UTC m=+1.257910535" May 17 00:50:11.539147 kubelet[2655]: I0517 00:50:11.539092 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-e6f3637a46" podStartSLOduration=1.5390748090000002 podStartE2EDuration="1.539074809s" podCreationTimestamp="2025-05-17 00:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:11.526697672 +0000 UTC m=+1.257886415" watchObservedRunningTime="2025-05-17 00:50:11.539074809 +0000 UTC m=+1.270263552" May 17 00:50:15.202959 kubelet[2655]: I0517 00:50:15.202923 2655 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:50:15.203541 env[1585]: time="2025-05-17T00:50:15.203502959Z" level=info msg="No cni config template is specified, wait for other system components 
to drop the config." May 17 00:50:15.204070 kubelet[2655]: I0517 00:50:15.204045 2655 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:50:15.876209 kubelet[2655]: I0517 00:50:15.876170 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210-xtables-lock\") pod \"kube-proxy-wrsj8\" (UID: \"fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210\") " pod="kube-system/kube-proxy-wrsj8" May 17 00:50:15.876396 kubelet[2655]: I0517 00:50:15.876380 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210-lib-modules\") pod \"kube-proxy-wrsj8\" (UID: \"fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210\") " pod="kube-system/kube-proxy-wrsj8" May 17 00:50:15.876537 kubelet[2655]: I0517 00:50:15.876523 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210-kube-proxy\") pod \"kube-proxy-wrsj8\" (UID: \"fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210\") " pod="kube-system/kube-proxy-wrsj8" May 17 00:50:15.876640 kubelet[2655]: I0517 00:50:15.876624 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8tc7\" (UniqueName: \"kubernetes.io/projected/fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210-kube-api-access-s8tc7\") pod \"kube-proxy-wrsj8\" (UID: \"fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210\") " pod="kube-system/kube-proxy-wrsj8" May 17 00:50:15.985433 kubelet[2655]: I0517 00:50:15.985383 2655 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:50:16.132914 env[1585]: time="2025-05-17T00:50:16.132106519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrsj8,Uid:fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210,Namespace:kube-system,Attempt:0,}" May 17 00:50:16.170446 env[1585]: time="2025-05-17T00:50:16.170322388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:16.170446 env[1585]: time="2025-05-17T00:50:16.170361827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:16.170446 env[1585]: time="2025-05-17T00:50:16.170371867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:16.178686 env[1585]: time="2025-05-17T00:50:16.178595856Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1293a28ce6417dc721e6b03041011d24f68e712d38066dc53ef23fb76d11ca6 pid=2719 runtime=io.containerd.runc.v2 May 17 00:50:16.220547 env[1585]: time="2025-05-17T00:50:16.220462920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrsj8,Uid:fa6f7b3d-ddf8-41ba-887d-ef79eb0e4210,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1293a28ce6417dc721e6b03041011d24f68e712d38066dc53ef23fb76d11ca6\"" May 17 00:50:16.223702 env[1585]: time="2025-05-17T00:50:16.223651996Z" level=info msg="CreateContainer within sandbox \"d1293a28ce6417dc721e6b03041011d24f68e712d38066dc53ef23fb76d11ca6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:50:16.284718 env[1585]: time="2025-05-17T00:50:16.284654474Z" level=info msg="CreateContainer within sandbox \"d1293a28ce6417dc721e6b03041011d24f68e712d38066dc53ef23fb76d11ca6\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f95e87358317269cd30754687909e5cb664177a2ac1dfda5d04a8d8c68b13743\"" May 17 00:50:16.285447 env[1585]: time="2025-05-17T00:50:16.285407713Z" level=info msg="StartContainer for \"f95e87358317269cd30754687909e5cb664177a2ac1dfda5d04a8d8c68b13743\"" May 17 00:50:16.304505 kubelet[2655]: W0517 00:50:16.304404 2655 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.7-n-e6f3637a46" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3510.3.7-n-e6f3637a46' and this object May 17 00:50:16.304505 kubelet[2655]: E0517 00:50:16.304451 2655 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.7-n-e6f3637a46\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-3510.3.7-n-e6f3637a46' and this object" logger="UnhandledError" May 17 00:50:16.361329 env[1585]: time="2025-05-17T00:50:16.361285211Z" level=info msg="StartContainer for \"f95e87358317269cd30754687909e5cb664177a2ac1dfda5d04a8d8c68b13743\" returns successfully" May 17 00:50:16.380203 kubelet[2655]: I0517 00:50:16.380146 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bdc5a0bc-a124-4eb9-9647-f7d426d3afd2-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-8kt99\" (UID: \"bdc5a0bc-a124-4eb9-9647-f7d426d3afd2\") " pod="tigera-operator/tigera-operator-7c5755cdcb-8kt99" May 17 00:50:16.380350 kubelet[2655]: I0517 00:50:16.380208 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xhtxs\" (UniqueName: \"kubernetes.io/projected/bdc5a0bc-a124-4eb9-9647-f7d426d3afd2-kube-api-access-xhtxs\") pod \"tigera-operator-7c5755cdcb-8kt99\" (UID: \"bdc5a0bc-a124-4eb9-9647-f7d426d3afd2\") " pod="tigera-operator/tigera-operator-7c5755cdcb-8kt99" May 17 00:50:16.481000 audit[2818]: NETFILTER_CFG table=mangle:45 family=2 entries=1 op=nft_register_chain pid=2818 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.487602 kernel: kauditd_printk_skb: 4 callbacks suppressed May 17 00:50:16.487697 kernel: audit: type=1325 audit(1747443016.481:243): table=mangle:45 family=2 entries=1 op=nft_register_chain pid=2818 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.481000 audit[2818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffeca7070 a2=0 a3=1 items=0 ppid=2769 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.527798 kernel: audit: type=1300 audit(1747443016.481:243): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffeca7070 a2=0 a3=1 items=0 ppid=2769 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:50:16.541423 kernel: audit: type=1327 audit(1747443016.481:243): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:50:16.482000 audit[2817]: NETFILTER_CFG table=mangle:46 family=10 entries=1 op=nft_register_chain pid=2817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.555531 kernel: audit: type=1325 
audit(1747443016.482:244): table=mangle:46 family=10 entries=1 op=nft_register_chain pid=2817 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.482000 audit[2817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc961cda0 a2=0 a3=1 items=0 ppid=2769 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.581432 kernel: audit: type=1300 audit(1747443016.482:244): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc961cda0 a2=0 a3=1 items=0 ppid=2769 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:50:16.596653 kernel: audit: type=1327 audit(1747443016.482:244): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 00:50:16.483000 audit[2820]: NETFILTER_CFG table=nat:47 family=2 entries=1 op=nft_register_chain pid=2820 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.611317 kernel: audit: type=1325 audit(1747443016.483:245): table=nat:47 family=2 entries=1 op=nft_register_chain pid=2820 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.483000 audit[2820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd9f6b70 a2=0 a3=1 items=0 ppid=2769 pid=2820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.638255 kernel: audit: type=1300 audit(1747443016.483:245): 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd9f6b70 a2=0 a3=1 items=0 ppid=2769 pid=2820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:50:16.656978 kernel: audit: type=1327 audit(1747443016.483:245): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:50:16.484000 audit[2821]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=2821 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.658865 kubelet[2655]: I0517 00:50:16.658815 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wrsj8" podStartSLOduration=1.658796465 podStartE2EDuration="1.658796465s" podCreationTimestamp="2025-05-17 00:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:16.658185755 +0000 UTC m=+6.389374498" watchObservedRunningTime="2025-05-17 00:50:16.658796465 +0000 UTC m=+6.389985208" May 17 00:50:16.671311 kernel: audit: type=1325 audit(1747443016.484:246): table=filter:48 family=2 entries=1 op=nft_register_chain pid=2821 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.484000 audit[2821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb577e30 a2=0 a3=1 items=0 ppid=2769 pid=2821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.484000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:50:16.493000 audit[2819]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=2819 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.493000 audit[2819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc708dd80 a2=0 a3=1 items=0 ppid=2769 pid=2819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 00:50:16.501000 audit[2822]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_chain pid=2822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.501000 audit[2822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5699bd0 a2=0 a3=1 items=0 ppid=2769 pid=2822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 00:50:16.582000 audit[2823]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2823 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.582000 audit[2823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd7bdf490 a2=0 a3=1 items=0 ppid=2769 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.582000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:50:16.597000 audit[2825]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.597000 audit[2825]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffea879ed0 a2=0 a3=1 items=0 ppid=2769 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 17 00:50:16.603000 audit[2829]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.603000 audit[2829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd4717410 a2=0 a3=1 items=0 ppid=2769 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 17 00:50:16.603000 audit[2830]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2830 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.603000 audit[2830]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=fffffedc9420 a2=0 a3=1 items=0 ppid=2769 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:50:16.603000 audit[2832]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2832 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.603000 audit[2832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd06f05a0 a2=0 a3=1 items=0 ppid=2769 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:50:16.608000 audit[2833]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=2833 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.608000 audit[2833]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffdc7ee0 a2=0 a3=1 items=0 ppid=2769 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.608000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:50:16.612000 audit[2835]: NETFILTER_CFG table=filter:57 family=2 entries=1 
op=nft_register_rule pid=2835 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.612000 audit[2835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff7cefa00 a2=0 a3=1 items=0 ppid=2769 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:50:16.642000 audit[2838]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2838 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.642000 audit[2838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd6487c40 a2=0 a3=1 items=0 ppid=2769 pid=2838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.642000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 17 00:50:16.643000 audit[2839]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2839 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.643000 audit[2839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9391000 a2=0 a3=1 items=0 ppid=2769 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:50:16.643000 audit[2841]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.643000 audit[2841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffef5e7710 a2=0 a3=1 items=0 ppid=2769 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:50:16.643000 audit[2842]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=2842 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.643000 audit[2842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffde42b0f0 a2=0 a3=1 items=0 ppid=2769 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:50:16.649000 audit[2844]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.649000 audit[2844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc3d04190 a2=0 a3=1 items=0 ppid=2769 pid=2844 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:50:16.654000 audit[2847]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_rule pid=2847 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.654000 audit[2847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc68502f0 a2=0 a3=1 items=0 ppid=2769 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:50:16.663000 audit[2850]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2850 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.663000 audit[2850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffca83a8f0 a2=0 a3=1 items=0 ppid=2769 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.663000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:50:16.676000 audit[2851]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_chain pid=2851 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.676000 audit[2851]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffc245fe0 a2=0 a3=1 items=0 ppid=2769 pid=2851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.676000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:50:16.679000 audit[2853]: NETFILTER_CFG table=nat:66 family=2 entries=1 op=nft_register_rule pid=2853 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.679000 audit[2853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcecc6340 a2=0 a3=1 items=0 ppid=2769 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:50:16.682000 audit[2856]: NETFILTER_CFG table=nat:67 family=2 entries=1 op=nft_register_rule pid=2856 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.682000 audit[2856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff0dbc7d0 a2=0 a3=1 items=0 ppid=2769 pid=2856 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:50:16.684000 audit[2857]: NETFILTER_CFG table=nat:68 family=2 entries=1 op=nft_register_chain pid=2857 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.684000 audit[2857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6bdda70 a2=0 a3=1 items=0 ppid=2769 pid=2857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:50:16.686000 audit[2859]: NETFILTER_CFG table=nat:69 family=2 entries=1 op=nft_register_rule pid=2859 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 00:50:16.686000 audit[2859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe9372a50 a2=0 a3=1 items=0 ppid=2769 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:50:16.775000 audit[2865]: NETFILTER_CFG table=filter:70 family=2 entries=8 
op=nft_register_rule pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:16.775000 audit[2865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcd8e0490 a2=0 a3=1 items=0 ppid=2769 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:16.801000 audit[2865]: NETFILTER_CFG table=nat:71 family=2 entries=14 op=nft_register_chain pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:16.801000 audit[2865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffcd8e0490 a2=0 a3=1 items=0 ppid=2769 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:16.802000 audit[2870]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.802000 audit[2870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe1d74f00 a2=0 a3=1 items=0 ppid=2769 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 00:50:16.804000 audit[2872]: 
NETFILTER_CFG table=filter:73 family=10 entries=2 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.804000 audit[2872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc6367f60 a2=0 a3=1 items=0 ppid=2769 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 17 00:50:16.808000 audit[2875]: NETFILTER_CFG table=filter:74 family=10 entries=2 op=nft_register_chain pid=2875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.808000 audit[2875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc8259f70 a2=0 a3=1 items=0 ppid=2769 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 17 00:50:16.809000 audit[2876]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.809000 audit[2876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3abe580 a2=0 a3=1 items=0 ppid=2769 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 00:50:16.811000 audit[2878]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2878 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.811000 audit[2878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe7c96860 a2=0 a3=1 items=0 ppid=2769 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 00:50:16.812000 audit[2879]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_chain pid=2879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.812000 audit[2879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc22dd9f0 a2=0 a3=1 items=0 ppid=2769 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 00:50:16.814000 audit[2881]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2881 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.814000 audit[2881]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=744 a0=3 a1=ffffd05f8640 a2=0 a3=1 items=0 ppid=2769 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 17 00:50:16.818000 audit[2884]: NETFILTER_CFG table=filter:79 family=10 entries=2 op=nft_register_chain pid=2884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.818000 audit[2884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff92c1d90 a2=0 a3=1 items=0 ppid=2769 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 00:50:16.819000 audit[2885]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.819000 audit[2885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff752aea0 a2=0 a3=1 items=0 ppid=2769 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.819000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 00:50:16.821000 audit[2887]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=2887 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.821000 audit[2887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe7d48ff0 a2=0 a3=1 items=0 ppid=2769 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 00:50:16.822000 audit[2888]: NETFILTER_CFG table=filter:82 family=10 entries=1 op=nft_register_chain pid=2888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.822000 audit[2888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc3cbdaa0 a2=0 a3=1 items=0 ppid=2769 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 00:50:16.825000 audit[2890]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=2890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.825000 audit[2890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd42f1e60 a2=0 a3=1 items=0 ppid=2769 pid=2890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 00:50:16.828000 audit[2893]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_rule pid=2893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.828000 audit[2893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdbc0b290 a2=0 a3=1 items=0 ppid=2769 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 00:50:16.831000 audit[2896]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.831000 audit[2896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe5cb3840 a2=0 a3=1 items=0 ppid=2769 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.831000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 17 00:50:16.833000 audit[2897]: NETFILTER_CFG table=nat:86 family=10 entries=1 op=nft_register_chain pid=2897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.833000 audit[2897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe817700 a2=0 a3=1 items=0 ppid=2769 pid=2897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 00:50:16.835000 audit[2899]: NETFILTER_CFG table=nat:87 family=10 entries=2 op=nft_register_chain pid=2899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.835000 audit[2899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe4ae3410 a2=0 a3=1 items=0 ppid=2769 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:50:16.838000 audit[2902]: NETFILTER_CFG table=nat:88 family=10 entries=2 op=nft_register_chain pid=2902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.838000 audit[2902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe2696e70 a2=0 a3=1 items=0 ppid=2769 
pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.838000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 00:50:16.839000 audit[2903]: NETFILTER_CFG table=nat:89 family=10 entries=1 op=nft_register_chain pid=2903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.839000 audit[2903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4caf280 a2=0 a3=1 items=0 ppid=2769 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.839000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 00:50:16.841000 audit[2905]: NETFILTER_CFG table=nat:90 family=10 entries=2 op=nft_register_chain pid=2905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.841000 audit[2905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff75b0430 a2=0 a3=1 items=0 ppid=2769 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 00:50:16.842000 audit[2906]: NETFILTER_CFG table=filter:91 
family=10 entries=1 op=nft_register_chain pid=2906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.842000 audit[2906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe22f6010 a2=0 a3=1 items=0 ppid=2769 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 00:50:16.844000 audit[2908]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=2908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.844000 audit[2908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffde03af50 a2=0 a3=1 items=0 ppid=2769 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:50:16.847000 audit[2911]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 00:50:16.847000 audit[2911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe5b6ac70 a2=0 a3=1 items=0 ppid=2769 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 00:50:16.850000 
audit[2913]: NETFILTER_CFG table=filter:94 family=10 entries=3 op=nft_register_rule pid=2913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:50:16.850000 audit[2913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff7543960 a2=0 a3=1 items=0 ppid=2769 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.850000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:16.850000 audit[2913]: NETFILTER_CFG table=nat:95 family=10 entries=7 op=nft_register_chain pid=2913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 00:50:16.850000 audit[2913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff7543960 a2=0 a3=1 items=0 ppid=2769 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:16.850000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:16.990611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255513804.mount: Deactivated successfully. May 17 00:50:17.803019 env[1585]: time="2025-05-17T00:50:17.802444582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-8kt99,Uid:bdc5a0bc-a124-4eb9-9647-f7d426d3afd2,Namespace:tigera-operator,Attempt:0,}" May 17 00:50:17.845770 env[1585]: time="2025-05-17T00:50:17.845602209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:50:17.845770 env[1585]: time="2025-05-17T00:50:17.845640564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:50:17.845770 env[1585]: time="2025-05-17T00:50:17.845651403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:50:17.846027 env[1585]: time="2025-05-17T00:50:17.845969601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84d8fe91326ae958f8890084182836efc7b4df4c4078e404cf3b789a000ab1a2 pid=2922 runtime=io.containerd.runc.v2
May 17 00:50:17.892477 env[1585]: time="2025-05-17T00:50:17.892438112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-8kt99,Uid:bdc5a0bc-a124-4eb9-9647-f7d426d3afd2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"84d8fe91326ae958f8890084182836efc7b4df4c4078e404cf3b789a000ab1a2\""
May 17 00:50:17.894082 env[1585]: time="2025-05-17T00:50:17.894057738Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\""
May 17 00:50:20.000303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311105157.mount: Deactivated successfully.
May 17 00:50:21.172285 env[1585]: time="2025-05-17T00:50:21.172235201Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:21.179803 env[1585]: time="2025-05-17T00:50:21.179742595Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:21.184458 env[1585]: time="2025-05-17T00:50:21.184258343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:21.193350 env[1585]: time="2025-05-17T00:50:21.193300837Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:21.194179 env[1585]: time="2025-05-17T00:50:21.194145657Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\""
May 17 00:50:21.198111 env[1585]: time="2025-05-17T00:50:21.198076434Z" level=info msg="CreateContainer within sandbox \"84d8fe91326ae958f8890084182836efc7b4df4c4078e404cf3b789a000ab1a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 17 00:50:21.221441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198063075.mount: Deactivated successfully.
May 17 00:50:21.227364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686940015.mount: Deactivated successfully.
May 17 00:50:21.243751 env[1585]: time="2025-05-17T00:50:21.243703334Z" level=info msg="CreateContainer within sandbox \"84d8fe91326ae958f8890084182836efc7b4df4c4078e404cf3b789a000ab1a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ad5427d6f3cbb7f4af6662c0dbd6a9ec1792b22edad8bc7efeee87210e5a427d\""
May 17 00:50:21.245704 env[1585]: time="2025-05-17T00:50:21.244639623Z" level=info msg="StartContainer for \"ad5427d6f3cbb7f4af6662c0dbd6a9ec1792b22edad8bc7efeee87210e5a427d\""
May 17 00:50:21.292408 env[1585]: time="2025-05-17T00:50:21.292357676Z" level=info msg="StartContainer for \"ad5427d6f3cbb7f4af6662c0dbd6a9ec1792b22edad8bc7efeee87210e5a427d\" returns successfully"
May 17 00:50:21.621068 kubelet[2655]: I0517 00:50:21.621002 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-8kt99" podStartSLOduration=2.319312879 podStartE2EDuration="5.620983407s" podCreationTimestamp="2025-05-17 00:50:16 +0000 UTC" firstStartedPulling="2025-05-17 00:50:17.893682507 +0000 UTC m=+7.624871250" lastFinishedPulling="2025-05-17 00:50:21.195353035 +0000 UTC m=+10.926541778" observedRunningTime="2025-05-17 00:50:21.620881219 +0000 UTC m=+11.352069962" watchObservedRunningTime="2025-05-17 00:50:21.620983407 +0000 UTC m=+11.352172150"
May 17 00:50:27.339953 sudo[1946]: pam_unix(sudo:session): session closed for user root
May 17 00:50:27.339000 audit[1946]: USER_END pid=1946 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=?
res=success'
May 17 00:50:27.345380 kernel: kauditd_printk_skb: 143 callbacks suppressed
May 17 00:50:27.345512 kernel: audit: type=1106 audit(1747443027.339:294): pid=1946 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:50:27.344000 audit[1946]: CRED_DISP pid=1946 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:50:27.386777 kernel: audit: type=1104 audit(1747443027.344:295): pid=1946 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 17 00:50:27.443655 sshd[1942]: pam_unix(sshd:session): session closed for user core
May 17 00:50:27.444000 audit[1942]: USER_END pid=1942 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:50:27.471523 systemd[1]: sshd@6-10.200.20.24:22-10.200.16.10:49156.service: Deactivated successfully.
May 17 00:50:27.472236 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:50:27.473267 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit.
May 17 00:50:27.474445 systemd-logind[1574]: Removed session 9.
May 17 00:50:27.444000 audit[1942]: CRED_DISP pid=1942 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:50:27.525245 kernel: audit: type=1106 audit(1747443027.444:296): pid=1942 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:50:27.525345 kernel: audit: type=1104 audit(1747443027.444:297): pid=1942 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:50:27.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.24:22-10.200.16.10:49156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:27.547280 kernel: audit: type=1131 audit(1747443027.471:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.24:22-10.200.16.10:49156 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 17 00:50:30.020000 audit[3039]: NETFILTER_CFG table=filter:96 family=2 entries=15 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:30.020000 audit[3039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd90094b0 a2=0 a3=1 items=0 ppid=2769 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:30.081315 kernel: audit: type=1325 audit(1747443030.020:299): table=filter:96 family=2 entries=15 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:30.081427 kernel: audit: type=1300 audit(1747443030.020:299): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd90094b0 a2=0 a3=1 items=0 ppid=2769 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:30.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:30.096868 kernel: audit: type=1327 audit(1747443030.020:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:30.040000 audit[3039]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:30.116950 kernel: audit: type=1325 audit(1747443030.040:300): table=nat:97 family=2 entries=12 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:30.040000 audit[3039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd90094b0 a2=0 a3=1 items=0 ppid=2769 pid=3039 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:30.147395 kernel: audit: type=1300 audit(1747443030.040:300): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd90094b0 a2=0 a3=1 items=0 ppid=2769 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:30.040000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:30.100000 audit[3041]: NETFILTER_CFG table=filter:98 family=2 entries=16 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:30.100000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffe02c4f0 a2=0 a3=1 items=0 ppid=2769 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:30.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:30.118000 audit[3041]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:30.118000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe02c4f0 a2=0 a3=1 items=0 ppid=2769 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:30.118000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:31.798000 audit[3044]: NETFILTER_CFG table=filter:100 family=2 entries=17 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:31.798000 audit[3044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffea7dc3b0 a2=0 a3=1 items=0 ppid=2769 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:31.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:31.812000 audit[3044]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:31.812000 audit[3044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea7dc3b0 a2=0 a3=1 items=0 ppid=2769 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:31.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:31.993354 kubelet[2655]: I0517 00:50:31.993315 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56ecda25-3d1e-4cdb-9fa1-451ce7b56f3b-typha-certs\") pod \"calico-typha-7499c5b985-xlpmp\" (UID: \"56ecda25-3d1e-4cdb-9fa1-451ce7b56f3b\") " pod="calico-system/calico-typha-7499c5b985-xlpmp" May 17 00:50:31.993808 kubelet[2655]: I0517 00:50:31.993789 2655 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56ecda25-3d1e-4cdb-9fa1-451ce7b56f3b-tigera-ca-bundle\") pod \"calico-typha-7499c5b985-xlpmp\" (UID: \"56ecda25-3d1e-4cdb-9fa1-451ce7b56f3b\") " pod="calico-system/calico-typha-7499c5b985-xlpmp" May 17 00:50:31.993939 kubelet[2655]: I0517 00:50:31.993925 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrm4b\" (UniqueName: \"kubernetes.io/projected/56ecda25-3d1e-4cdb-9fa1-451ce7b56f3b-kube-api-access-vrm4b\") pod \"calico-typha-7499c5b985-xlpmp\" (UID: \"56ecda25-3d1e-4cdb-9fa1-451ce7b56f3b\") " pod="calico-system/calico-typha-7499c5b985-xlpmp" May 17 00:50:32.042000 audit[3046]: NETFILTER_CFG table=filter:102 family=2 entries=19 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:32.042000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcf59b4d0 a2=0 a3=1 items=0 ppid=2769 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:32.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:32.050000 audit[3046]: NETFILTER_CFG table=nat:103 family=2 entries=12 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:32.050000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf59b4d0 a2=0 a3=1 items=0 ppid=2769 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:32.050000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:32.195079 kubelet[2655]: I0517 00:50:32.195042 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-policysync\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.195328 kubelet[2655]: I0517 00:50:32.195294 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-var-lib-calico\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.195450 kubelet[2655]: I0517 00:50:32.195424 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d67ded12-d32e-496f-b2c5-20f26c27069b-tigera-ca-bundle\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.195569 kubelet[2655]: I0517 00:50:32.195556 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-var-run-calico\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.195692 kubelet[2655]: I0517 00:50:32.195673 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d67ded12-d32e-496f-b2c5-20f26c27069b-node-certs\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " 
pod="calico-system/calico-node-m4zf5" May 17 00:50:32.195812 kubelet[2655]: I0517 00:50:32.195799 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qtj5\" (UniqueName: \"kubernetes.io/projected/d67ded12-d32e-496f-b2c5-20f26c27069b-kube-api-access-9qtj5\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.195924 kubelet[2655]: I0517 00:50:32.195911 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-flexvol-driver-host\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.196030 kubelet[2655]: I0517 00:50:32.196018 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-cni-net-dir\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.196130 kubelet[2655]: I0517 00:50:32.196118 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-lib-modules\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.196310 kubelet[2655]: I0517 00:50:32.196279 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-xtables-lock\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 
00:50:32.196429 kubelet[2655]: I0517 00:50:32.196416 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-cni-bin-dir\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.196566 kubelet[2655]: I0517 00:50:32.196545 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d67ded12-d32e-496f-b2c5-20f26c27069b-cni-log-dir\") pod \"calico-node-m4zf5\" (UID: \"d67ded12-d32e-496f-b2c5-20f26c27069b\") " pod="calico-system/calico-node-m4zf5" May 17 00:50:32.222547 env[1585]: time="2025-05-17T00:50:32.222464256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7499c5b985-xlpmp,Uid:56ecda25-3d1e-4cdb-9fa1-451ce7b56f3b,Namespace:calico-system,Attempt:0,}" May 17 00:50:32.265564 env[1585]: time="2025-05-17T00:50:32.265167200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:32.265564 env[1585]: time="2025-05-17T00:50:32.265207917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:32.265564 env[1585]: time="2025-05-17T00:50:32.265218276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:32.265564 env[1585]: time="2025-05-17T00:50:32.265334865Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2441bdde4362b9c79db423876c440a483de0d06f43fb8ab3c77b3b878204df0 pid=3055 runtime=io.containerd.runc.v2 May 17 00:50:32.280714 kubelet[2655]: E0517 00:50:32.280667 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ksxs" podUID="1d05e6a1-f683-492a-9425-3efff5a40b11" May 17 00:50:32.308531 kubelet[2655]: E0517 00:50:32.304855 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.309092 kubelet[2655]: W0517 00:50:32.309061 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.314281 kubelet[2655]: E0517 00:50:32.311605 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.314281 kubelet[2655]: W0517 00:50:32.311629 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.314281 kubelet[2655]: E0517 00:50:32.311650 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.391323 env[1585]: time="2025-05-17T00:50:32.391276729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7499c5b985-xlpmp,Uid:56ecda25-3d1e-4cdb-9fa1-451ce7b56f3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2441bdde4362b9c79db423876c440a483de0d06f43fb8ab3c77b3b878204df0\"" May 17 00:50:32.393251 env[1585]: time="2025-05-17T00:50:32.393221119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:50:32.398535 kubelet[2655]: E0517 00:50:32.398512 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.398671 kubelet[2655]: W0517 00:50:32.398657 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.398800 kubelet[2655]: E0517 00:50:32.398787 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.398894 kubelet[2655]: I0517 00:50:32.398878 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1d05e6a1-f683-492a-9425-3efff5a40b11-varrun\") pod \"csi-node-driver-8ksxs\" (UID: \"1d05e6a1-f683-492a-9425-3efff5a40b11\") " pod="calico-system/csi-node-driver-8ksxs" May 17 00:50:32.399184 kubelet[2655]: E0517 00:50:32.399165 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.399273 kubelet[2655]: W0517 00:50:32.399261 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.399356 kubelet[2655]: E0517 00:50:32.399344 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.399714 kubelet[2655]: E0517 00:50:32.399699 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.399820 kubelet[2655]: W0517 00:50:32.399805 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.399899 kubelet[2655]: E0517 00:50:32.399887 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.400041 kubelet[2655]: I0517 00:50:32.400026 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1d05e6a1-f683-492a-9425-3efff5a40b11-socket-dir\") pod \"csi-node-driver-8ksxs\" (UID: \"1d05e6a1-f683-492a-9425-3efff5a40b11\") " pod="calico-system/csi-node-driver-8ksxs" May 17 00:50:32.400247 kubelet[2655]: E0517 00:50:32.400234 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.400338 kubelet[2655]: W0517 00:50:32.400326 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.400409 kubelet[2655]: E0517 00:50:32.400388 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.401329 kubelet[2655]: E0517 00:50:32.401313 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.401451 kubelet[2655]: W0517 00:50:32.401438 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.401545 kubelet[2655]: E0517 00:50:32.401534 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.401805 kubelet[2655]: E0517 00:50:32.401791 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.401911 kubelet[2655]: W0517 00:50:32.401898 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.401985 kubelet[2655]: E0517 00:50:32.401975 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.402184 kubelet[2655]: I0517 00:50:32.402171 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d05e6a1-f683-492a-9425-3efff5a40b11-kubelet-dir\") pod \"csi-node-driver-8ksxs\" (UID: \"1d05e6a1-f683-492a-9425-3efff5a40b11\") " pod="calico-system/csi-node-driver-8ksxs" May 17 00:50:32.402390 kubelet[2655]: E0517 00:50:32.402378 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.402479 kubelet[2655]: W0517 00:50:32.402466 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.402576 kubelet[2655]: E0517 00:50:32.402565 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.402862 kubelet[2655]: E0517 00:50:32.402849 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.402993 kubelet[2655]: W0517 00:50:32.402979 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.403071 kubelet[2655]: E0517 00:50:32.403060 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.403689 kubelet[2655]: E0517 00:50:32.403672 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.403818 kubelet[2655]: W0517 00:50:32.403804 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.403893 kubelet[2655]: E0517 00:50:32.403882 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.404072 kubelet[2655]: I0517 00:50:32.404057 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1d05e6a1-f683-492a-9425-3efff5a40b11-registration-dir\") pod \"csi-node-driver-8ksxs\" (UID: \"1d05e6a1-f683-492a-9425-3efff5a40b11\") " pod="calico-system/csi-node-driver-8ksxs" May 17 00:50:32.404803 kubelet[2655]: E0517 00:50:32.404786 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.404923 kubelet[2655]: W0517 00:50:32.404911 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.405009 kubelet[2655]: E0517 00:50:32.404998 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.406006 kubelet[2655]: E0517 00:50:32.405990 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.406129 kubelet[2655]: W0517 00:50:32.406116 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.406242 kubelet[2655]: E0517 00:50:32.406198 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.407034 kubelet[2655]: E0517 00:50:32.407016 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.407130 kubelet[2655]: W0517 00:50:32.407117 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.407225 kubelet[2655]: E0517 00:50:32.407214 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.407381 kubelet[2655]: I0517 00:50:32.407366 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nbch\" (UniqueName: \"kubernetes.io/projected/1d05e6a1-f683-492a-9425-3efff5a40b11-kube-api-access-9nbch\") pod \"csi-node-driver-8ksxs\" (UID: \"1d05e6a1-f683-492a-9425-3efff5a40b11\") " pod="calico-system/csi-node-driver-8ksxs" May 17 00:50:32.407612 kubelet[2655]: E0517 00:50:32.407600 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.407707 kubelet[2655]: W0517 00:50:32.407685 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.408178 kubelet[2655]: E0517 00:50:32.408161 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.408626 kubelet[2655]: E0517 00:50:32.408610 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.408730 kubelet[2655]: W0517 00:50:32.408717 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.408807 kubelet[2655]: E0517 00:50:32.408795 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.409340 kubelet[2655]: E0517 00:50:32.409324 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.409443 kubelet[2655]: W0517 00:50:32.409431 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.409535 kubelet[2655]: E0517 00:50:32.409523 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.436894 env[1585]: time="2025-05-17T00:50:32.436851062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m4zf5,Uid:d67ded12-d32e-496f-b2c5-20f26c27069b,Namespace:calico-system,Attempt:0,}" May 17 00:50:32.486814 env[1585]: time="2025-05-17T00:50:32.486672064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:32.487047 env[1585]: time="2025-05-17T00:50:32.487001196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:32.487172 env[1585]: time="2025-05-17T00:50:32.487150423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:32.487478 env[1585]: time="2025-05-17T00:50:32.487442277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81b094fb325ad0ba57db451213dee1030d64f85b9729b7a05c161d12413e05d6 pid=3146 runtime=io.containerd.runc.v2 May 17 00:50:32.509812 kubelet[2655]: E0517 00:50:32.509625 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.509812 kubelet[2655]: W0517 00:50:32.509648 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.509812 kubelet[2655]: E0517 00:50:32.509678 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.510236 kubelet[2655]: E0517 00:50:32.510066 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.510236 kubelet[2655]: W0517 00:50:32.510080 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.510236 kubelet[2655]: E0517 00:50:32.510111 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.510703 kubelet[2655]: E0517 00:50:32.510432 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.510703 kubelet[2655]: W0517 00:50:32.510445 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.510703 kubelet[2655]: E0517 00:50:32.510474 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.511082 kubelet[2655]: E0517 00:50:32.510872 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.511082 kubelet[2655]: W0517 00:50:32.510886 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.511082 kubelet[2655]: E0517 00:50:32.510901 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.511430 kubelet[2655]: E0517 00:50:32.511245 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.511430 kubelet[2655]: W0517 00:50:32.511256 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.511430 kubelet[2655]: E0517 00:50:32.511270 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.511801 kubelet[2655]: E0517 00:50:32.511593 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.511801 kubelet[2655]: W0517 00:50:32.511607 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.511801 kubelet[2655]: E0517 00:50:32.511692 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.512146 kubelet[2655]: E0517 00:50:32.511955 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.512146 kubelet[2655]: W0517 00:50:32.511967 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.512146 kubelet[2655]: E0517 00:50:32.512057 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.512549 kubelet[2655]: E0517 00:50:32.512297 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.512549 kubelet[2655]: W0517 00:50:32.512309 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.512549 kubelet[2655]: E0517 00:50:32.512391 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.512959 kubelet[2655]: E0517 00:50:32.512732 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.512959 kubelet[2655]: W0517 00:50:32.512745 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.512959 kubelet[2655]: E0517 00:50:32.512859 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.513279 kubelet[2655]: E0517 00:50:32.513092 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.513279 kubelet[2655]: W0517 00:50:32.513104 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.513279 kubelet[2655]: E0517 00:50:32.513183 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.513617 kubelet[2655]: E0517 00:50:32.513438 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.513617 kubelet[2655]: W0517 00:50:32.513450 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.513617 kubelet[2655]: E0517 00:50:32.513535 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.513916 kubelet[2655]: E0517 00:50:32.513769 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.513916 kubelet[2655]: W0517 00:50:32.513780 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.513916 kubelet[2655]: E0517 00:50:32.513856 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.514175 kubelet[2655]: E0517 00:50:32.514162 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.514326 kubelet[2655]: W0517 00:50:32.514248 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.514404 kubelet[2655]: E0517 00:50:32.514390 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.514596 kubelet[2655]: E0517 00:50:32.514584 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.514691 kubelet[2655]: W0517 00:50:32.514677 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.514831 kubelet[2655]: E0517 00:50:32.514819 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.516278 kubelet[2655]: E0517 00:50:32.516261 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.516398 kubelet[2655]: W0517 00:50:32.516384 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.516588 kubelet[2655]: E0517 00:50:32.516562 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.516771 kubelet[2655]: E0517 00:50:32.516757 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.516841 kubelet[2655]: W0517 00:50:32.516830 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.517122 kubelet[2655]: E0517 00:50:32.517109 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.517265 kubelet[2655]: E0517 00:50:32.517255 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.517353 kubelet[2655]: W0517 00:50:32.517341 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.517447 kubelet[2655]: E0517 00:50:32.517427 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.518228 kubelet[2655]: E0517 00:50:32.518214 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.518324 kubelet[2655]: W0517 00:50:32.518299 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.518556 kubelet[2655]: E0517 00:50:32.518535 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.519094 kubelet[2655]: E0517 00:50:32.519081 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.519202 kubelet[2655]: W0517 00:50:32.519189 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.519313 kubelet[2655]: E0517 00:50:32.519286 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.519876 kubelet[2655]: E0517 00:50:32.519861 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.520249 kubelet[2655]: W0517 00:50:32.520237 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.520538 kubelet[2655]: E0517 00:50:32.520520 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:32.522957 kubelet[2655]: E0517 00:50:32.522939 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.523060 kubelet[2655]: W0517 00:50:32.523044 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.525607 kubelet[2655]: E0517 00:50:32.525580 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:32.525901 kubelet[2655]: E0517 00:50:32.525890 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:32.526010 kubelet[2655]: W0517 00:50:32.525997 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:32.526188 kubelet[2655]: E0517 00:50:32.526175 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 17 00:50:32.526433 kubelet[2655]: E0517 00:50:32.526421 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:32.526544 kubelet[2655]: W0517 00:50:32.526530 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:32.526732 kubelet[2655]: E0517 00:50:32.526720 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:32.527469 kubelet[2655]: E0517 00:50:32.527445 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:32.527616 kubelet[2655]: W0517 00:50:32.527603 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:32.527737 kubelet[2655]: E0517 00:50:32.527725 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:32.530028 kubelet[2655]: E0517 00:50:32.530001 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:32.530028 kubelet[2655]: W0517 00:50:32.530020 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:32.530131 kubelet[2655]: E0517 00:50:32.530036 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:32.533110 kubelet[2655]: E0517 00:50:32.533092 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:32.533269 kubelet[2655]: W0517 00:50:32.533213 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:32.533385 kubelet[2655]: E0517 00:50:32.533372 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:32.545580 env[1585]: time="2025-05-17T00:50:32.545537955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m4zf5,Uid:d67ded12-d32e-496f-b2c5-20f26c27069b,Namespace:calico-system,Attempt:0,} returns sandbox id \"81b094fb325ad0ba57db451213dee1030d64f85b9729b7a05c161d12413e05d6\""
May 17 00:50:33.061000 audit[3208]: NETFILTER_CFG table=filter:104 family=2 entries=21 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:50:33.069030 kernel: kauditd_printk_skb: 19 callbacks suppressed
May 17 00:50:33.069139 kernel: audit: type=1325 audit(1747443033.061:307): table=filter:104 family=2 entries=21 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:50:33.061000 audit[3208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe1ec5430 a2=0 a3=1 items=0 ppid=2769 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:50:33.105464 systemd[1]: run-containerd-runc-k8s.io-c2441bdde4362b9c79db423876c440a483de0d06f43fb8ab3c77b3b878204df0-runc.zV6H0i.mount: Deactivated successfully.
May 17 00:50:33.112882 kernel: audit: type=1300 audit(1747443033.061:307): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe1ec5430 a2=0 a3=1 items=0 ppid=2769 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:50:33.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:50:33.130456 kernel: audit: type=1327 audit(1747443033.061:307): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:50:33.113000 audit[3208]: NETFILTER_CFG table=nat:105 family=2 entries=12 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:50:33.145701 kernel: audit: type=1325 audit(1747443033.113:308): table=nat:105 family=2 entries=12 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:50:33.113000 audit[3208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1ec5430 a2=0 a3=1 items=0 ppid=2769 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:50:33.174547 kernel: audit: type=1300 audit(1747443033.113:308): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1ec5430 a2=0 a3=1 items=0 ppid=2769 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:50:33.174701 kernel: audit: type=1327 audit(1747443033.113:308): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:50:33.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
May 17 00:50:33.810633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721370752.mount: Deactivated successfully.
May 17 00:50:34.487536 kubelet[2655]: E0517 00:50:34.484677 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ksxs" podUID="1d05e6a1-f683-492a-9425-3efff5a40b11"
May 17 00:50:35.410437 env[1585]: time="2025-05-17T00:50:35.410394258Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:35.417288 env[1585]: time="2025-05-17T00:50:35.417252823Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:35.420999 env[1585]: time="2025-05-17T00:50:35.420970843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:35.426815 env[1585]: time="2025-05-17T00:50:35.426782253Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:35.427372 env[1585]: time="2025-05-17T00:50:35.427343807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\""
May 17 00:50:35.429146 env[1585]: time="2025-05-17T00:50:35.429121424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\""
May 17 00:50:35.443110 env[1585]: time="2025-05-17T00:50:35.443074136Z" level=info msg="CreateContainer within sandbox \"c2441bdde4362b9c79db423876c440a483de0d06f43fb8ab3c77b3b878204df0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 17 00:50:35.489613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320348366.mount: Deactivated successfully.
May 17 00:50:35.506810 env[1585]: time="2025-05-17T00:50:35.506759426Z" level=info msg="CreateContainer within sandbox \"c2441bdde4362b9c79db423876c440a483de0d06f43fb8ab3c77b3b878204df0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7e096f1546419e759857ccd9ce9183061412d3c5052355a5b681b42b83d9e372\""
May 17 00:50:35.510099 env[1585]: time="2025-05-17T00:50:35.510053560Z" level=info msg="StartContainer for \"7e096f1546419e759857ccd9ce9183061412d3c5052355a5b681b42b83d9e372\""
May 17 00:50:35.576882 env[1585]: time="2025-05-17T00:50:35.575942872Z" level=info msg="StartContainer for \"7e096f1546419e759857ccd9ce9183061412d3c5052355a5b681b42b83d9e372\" returns successfully"
May 17 00:50:35.708058 kubelet[2655]: E0517 00:50:35.707858 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.708058 kubelet[2655]: W0517 00:50:35.707879 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.708058 kubelet[2655]: E0517 00:50:35.707900 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.709092 kubelet[2655]: E0517 00:50:35.708507 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.709092 kubelet[2655]: W0517 00:50:35.708521 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.709092 kubelet[2655]: E0517 00:50:35.708533 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.709418 kubelet[2655]: E0517 00:50:35.709280 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.709418 kubelet[2655]: W0517 00:50:35.709296 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.709418 kubelet[2655]: E0517 00:50:35.709322 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.709777 kubelet[2655]: E0517 00:50:35.709593 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.709777 kubelet[2655]: W0517 00:50:35.709604 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.709777 kubelet[2655]: E0517 00:50:35.709614 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.710101 kubelet[2655]: E0517 00:50:35.709989 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.710101 kubelet[2655]: W0517 00:50:35.710002 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.710101 kubelet[2655]: E0517 00:50:35.710013 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.710417 kubelet[2655]: E0517 00:50:35.710266 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.710417 kubelet[2655]: W0517 00:50:35.710277 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.710417 kubelet[2655]: E0517 00:50:35.710287 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.710708 kubelet[2655]: E0517 00:50:35.710599 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.710708 kubelet[2655]: W0517 00:50:35.710611 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.710708 kubelet[2655]: E0517 00:50:35.710621 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.711053 kubelet[2655]: E0517 00:50:35.710877 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.711053 kubelet[2655]: W0517 00:50:35.710889 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.711053 kubelet[2655]: E0517 00:50:35.710899 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.711328 kubelet[2655]: E0517 00:50:35.711218 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.711328 kubelet[2655]: W0517 00:50:35.711229 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.711328 kubelet[2655]: E0517 00:50:35.711242 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.711710 kubelet[2655]: E0517 00:50:35.711555 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.711710 kubelet[2655]: W0517 00:50:35.711567 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.711710 kubelet[2655]: E0517 00:50:35.711578 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.711992 kubelet[2655]: E0517 00:50:35.711881 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.711992 kubelet[2655]: W0517 00:50:35.711894 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.711992 kubelet[2655]: E0517 00:50:35.711905 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.712336 kubelet[2655]: E0517 00:50:35.712173 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.712336 kubelet[2655]: W0517 00:50:35.712185 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.712336 kubelet[2655]: E0517 00:50:35.712196 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.712609 kubelet[2655]: E0517 00:50:35.712502 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.712609 kubelet[2655]: W0517 00:50:35.712514 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.712609 kubelet[2655]: E0517 00:50:35.712524 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.713396 kubelet[2655]: E0517 00:50:35.712782 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.713396 kubelet[2655]: W0517 00:50:35.712794 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.713396 kubelet[2655]: E0517 00:50:35.712804 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.713784 kubelet[2655]: E0517 00:50:35.713680 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.713784 kubelet[2655]: W0517 00:50:35.713694 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.713784 kubelet[2655]: E0517 00:50:35.713713 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.748540 kubelet[2655]: E0517 00:50:35.748449 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.748540 kubelet[2655]: W0517 00:50:35.748474 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.748540 kubelet[2655]: E0517 00:50:35.748508 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.748743 kubelet[2655]: E0517 00:50:35.748706 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.748743 kubelet[2655]: W0517 00:50:35.748715 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.748743 kubelet[2655]: E0517 00:50:35.748724 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.749013 kubelet[2655]: E0517 00:50:35.748993 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.749013 kubelet[2655]: W0517 00:50:35.749008 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.749109 kubelet[2655]: E0517 00:50:35.749024 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.749238 kubelet[2655]: E0517 00:50:35.749219 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.749238 kubelet[2655]: W0517 00:50:35.749232 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.749316 kubelet[2655]: E0517 00:50:35.749245 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.749505 kubelet[2655]: E0517 00:50:35.749475 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.749505 kubelet[2655]: W0517 00:50:35.749500 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.749600 kubelet[2655]: E0517 00:50:35.749520 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.752353 kubelet[2655]: E0517 00:50:35.752322 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.752353 kubelet[2655]: W0517 00:50:35.752344 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.752495 kubelet[2655]: E0517 00:50:35.752364 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.752659 kubelet[2655]: E0517 00:50:35.752635 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.752659 kubelet[2655]: W0517 00:50:35.752652 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.753088 kubelet[2655]: E0517 00:50:35.752772 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.753219 kubelet[2655]: E0517 00:50:35.753199 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.753219 kubelet[2655]: W0517 00:50:35.753215 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.753307 kubelet[2655]: E0517 00:50:35.753297 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.753433 kubelet[2655]: E0517 00:50:35.753418 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.753433 kubelet[2655]: W0517 00:50:35.753430 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.753528 kubelet[2655]: E0517 00:50:35.753522 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.753700 kubelet[2655]: E0517 00:50:35.753682 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.753700 kubelet[2655]: W0517 00:50:35.753697 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.753787 kubelet[2655]: E0517 00:50:35.753712 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.753912 kubelet[2655]: E0517 00:50:35.753895 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.753912 kubelet[2655]: W0517 00:50:35.753908 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.753988 kubelet[2655]: E0517 00:50:35.753924 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.754160 kubelet[2655]: E0517 00:50:35.754136 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.754160 kubelet[2655]: W0517 00:50:35.754153 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.754160 kubelet[2655]: E0517 00:50:35.754169 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.756786 kubelet[2655]: E0517 00:50:35.756759 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.756786 kubelet[2655]: W0517 00:50:35.756779 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.756914 kubelet[2655]: E0517 00:50:35.756798 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.757309 kubelet[2655]: E0517 00:50:35.757286 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.757309 kubelet[2655]: W0517 00:50:35.757305 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.757565 kubelet[2655]: E0517 00:50:35.757431 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.757704 kubelet[2655]: E0517 00:50:35.757683 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.757704 kubelet[2655]: W0517 00:50:35.757701 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.757795 kubelet[2655]: E0517 00:50:35.757713 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.757893 kubelet[2655]: E0517 00:50:35.757878 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.757893 kubelet[2655]: W0517 00:50:35.757891 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.757960 kubelet[2655]: E0517 00:50:35.757900 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.758145 kubelet[2655]: E0517 00:50:35.758125 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.758145 kubelet[2655]: W0517 00:50:35.758143 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.758224 kubelet[2655]: E0517 00:50:35.758155 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:35.758675 kubelet[2655]: E0517 00:50:35.758657 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:35.758675 kubelet[2655]: W0517 00:50:35.758674 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:35.758746 kubelet[2655]: E0517 00:50:35.758685 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:36.480899 kubelet[2655]: E0517 00:50:36.480614 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ksxs" podUID="1d05e6a1-f683-492a-9425-3efff5a40b11"
May 17 00:50:36.651194 kubelet[2655]: I0517 00:50:36.650029 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7499c5b985-xlpmp" podStartSLOduration=2.6140455940000002 podStartE2EDuration="5.65001017s" podCreationTimestamp="2025-05-17 00:50:31 +0000 UTC" firstStartedPulling="2025-05-17 00:50:32.39275404 +0000 UTC m=+22.123942783" lastFinishedPulling="2025-05-17 00:50:35.428718616 +0000 UTC m=+25.159907359" observedRunningTime="2025-05-17 00:50:35.645995968 +0000 UTC m=+25.377184711" watchObservedRunningTime="2025-05-17 00:50:36.65001017 +0000 UTC m=+26.381198913"
May 17 00:50:36.689000 audit[3293]: NETFILTER_CFG table=filter:106 family=2 entries=21 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:50:36.705886 kernel: audit: type=1325 audit(1747443036.689:309): table=filter:106 family=2 entries=21 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
May 17 00:50:36.733508 kernel: audit: type=1300 audit(1747443036.689:309): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeba86df0 a2=0 a3=1 items=0 ppid=2769 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:50:36.689000 audit[3293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeba86df0 a2=0 a3=1 items=0 ppid=2769 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:50:36.734634 kubelet[2655]: E0517 00:50:36.734412 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:36.734634 kubelet[2655]: W0517 00:50:36.734438 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:36.734634 kubelet[2655]: E0517 00:50:36.734460 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:36.735164 kubelet[2655]: E0517 00:50:36.735015 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:36.735164 kubelet[2655]: W0517 00:50:36.735029 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:36.735164 kubelet[2655]: E0517 00:50:36.735042 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:36.735469 kubelet[2655]: E0517 00:50:36.735315 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:36.735469 kubelet[2655]: W0517 00:50:36.735327 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:36.735469 kubelet[2655]: E0517 00:50:36.735337 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:36.735832 kubelet[2655]: E0517 00:50:36.735662 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:36.735832 kubelet[2655]: W0517 00:50:36.735675 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:36.735832 kubelet[2655]: E0517 00:50:36.735686 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:36.736106 kubelet[2655]: E0517 00:50:36.735973 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:36.736106 kubelet[2655]: W0517 00:50:36.735985 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:36.736106 kubelet[2655]: E0517 00:50:36.735995 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 17 00:50:36.736374 kubelet[2655]: E0517 00:50:36.736234 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 17 00:50:36.736374 kubelet[2655]: W0517 00:50:36.736246 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 17 00:50:36.736374 kubelet[2655]: E0517 00:50:36.736256 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 17 00:50:36.736684 kubelet[2655]: E0517 00:50:36.736537 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.736684 kubelet[2655]: W0517 00:50:36.736551 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.736684 kubelet[2655]: E0517 00:50:36.736563 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.736974 kubelet[2655]: E0517 00:50:36.736824 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.736974 kubelet[2655]: W0517 00:50:36.736836 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.736974 kubelet[2655]: E0517 00:50:36.736846 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.737244 kubelet[2655]: E0517 00:50:36.737109 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.737244 kubelet[2655]: W0517 00:50:36.737121 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.737244 kubelet[2655]: E0517 00:50:36.737132 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.737542 kubelet[2655]: E0517 00:50:36.737379 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.737542 kubelet[2655]: W0517 00:50:36.737391 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.737542 kubelet[2655]: E0517 00:50:36.737401 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.737832 kubelet[2655]: E0517 00:50:36.737689 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.737832 kubelet[2655]: W0517 00:50:36.737702 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.737832 kubelet[2655]: E0517 00:50:36.737712 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.738124 kubelet[2655]: E0517 00:50:36.737979 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.738124 kubelet[2655]: W0517 00:50:36.737990 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.738124 kubelet[2655]: E0517 00:50:36.738000 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.738396 kubelet[2655]: E0517 00:50:36.738262 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.738396 kubelet[2655]: W0517 00:50:36.738273 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.738396 kubelet[2655]: E0517 00:50:36.738283 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.738701 kubelet[2655]: E0517 00:50:36.738558 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.738701 kubelet[2655]: W0517 00:50:36.738571 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.738701 kubelet[2655]: E0517 00:50:36.738581 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.738920 kubelet[2655]: E0517 00:50:36.738842 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.738920 kubelet[2655]: W0517 00:50:36.738853 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.738920 kubelet[2655]: E0517 00:50:36.738863 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.689000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:36.752805 kernel: audit: type=1327 audit(1747443036.689:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:36.753511 kernel: audit: type=1325 audit(1747443036.744:310): table=nat:107 family=2 entries=19 op=nft_register_chain pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:36.744000 audit[3293]: NETFILTER_CFG table=nat:107 family=2 entries=19 op=nft_register_chain pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:36.744000 audit[3293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffeba86df0 a2=0 a3=1 items=0 ppid=2769 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:36.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:36.768046 kubelet[2655]: E0517 00:50:36.768020 
2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.768144 kubelet[2655]: W0517 00:50:36.768127 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.768218 kubelet[2655]: E0517 00:50:36.768205 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.768610 kubelet[2655]: E0517 00:50:36.768595 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.768704 kubelet[2655]: W0517 00:50:36.768692 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.768777 kubelet[2655]: E0517 00:50:36.768766 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.768975 kubelet[2655]: E0517 00:50:36.768936 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.768975 kubelet[2655]: W0517 00:50:36.768953 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.768975 kubelet[2655]: E0517 00:50:36.768971 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.769227 kubelet[2655]: E0517 00:50:36.769122 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.769227 kubelet[2655]: W0517 00:50:36.769136 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.769227 kubelet[2655]: E0517 00:50:36.769148 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.769335 kubelet[2655]: E0517 00:50:36.769325 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.769335 kubelet[2655]: W0517 00:50:36.769333 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.769384 kubelet[2655]: E0517 00:50:36.769344 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.769502 kubelet[2655]: E0517 00:50:36.769471 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.769502 kubelet[2655]: W0517 00:50:36.769495 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.769580 kubelet[2655]: E0517 00:50:36.769505 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.769868 kubelet[2655]: E0517 00:50:36.769634 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.769868 kubelet[2655]: W0517 00:50:36.769648 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.769868 kubelet[2655]: E0517 00:50:36.769665 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.769868 kubelet[2655]: E0517 00:50:36.769818 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.769868 kubelet[2655]: W0517 00:50:36.769826 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.769868 kubelet[2655]: E0517 00:50:36.769834 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.770280 kubelet[2655]: E0517 00:50:36.770253 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.770280 kubelet[2655]: W0517 00:50:36.770275 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.770377 kubelet[2655]: E0517 00:50:36.770355 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.770747 kubelet[2655]: E0517 00:50:36.770471 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.770747 kubelet[2655]: W0517 00:50:36.770496 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.770747 kubelet[2655]: E0517 00:50:36.770568 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.770747 kubelet[2655]: E0517 00:50:36.770664 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.770747 kubelet[2655]: W0517 00:50:36.770671 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.770747 kubelet[2655]: E0517 00:50:36.770681 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.770960 kubelet[2655]: E0517 00:50:36.770814 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.770960 kubelet[2655]: W0517 00:50:36.770824 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.770960 kubelet[2655]: E0517 00:50:36.770832 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.771028 kubelet[2655]: E0517 00:50:36.770961 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.771028 kubelet[2655]: W0517 00:50:36.770969 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.771028 kubelet[2655]: E0517 00:50:36.770982 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.771196 kubelet[2655]: E0517 00:50:36.771170 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.771196 kubelet[2655]: W0517 00:50:36.771185 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.771196 kubelet[2655]: E0517 00:50:36.771197 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.771648 kubelet[2655]: E0517 00:50:36.771630 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.771648 kubelet[2655]: W0517 00:50:36.771646 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.771766 kubelet[2655]: E0517 00:50:36.771751 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.771841 kubelet[2655]: E0517 00:50:36.771795 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.771910 kubelet[2655]: W0517 00:50:36.771895 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.771967 kubelet[2655]: E0517 00:50:36.771956 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.772382 kubelet[2655]: E0517 00:50:36.772361 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.772451 env[1585]: time="2025-05-17T00:50:36.772394727Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:36.772806 kubelet[2655]: W0517 00:50:36.772785 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.772893 kubelet[2655]: E0517 00:50:36.772881 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:50:36.773344 kubelet[2655]: E0517 00:50:36.773327 2655 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:50:36.773430 kubelet[2655]: W0517 00:50:36.773417 2655 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:50:36.773562 kubelet[2655]: E0517 00:50:36.773528 2655 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:50:36.780999 env[1585]: time="2025-05-17T00:50:36.780700913Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:36.784934 env[1585]: time="2025-05-17T00:50:36.784883743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:36.790207 env[1585]: time="2025-05-17T00:50:36.790175407Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:36.790580 env[1585]: time="2025-05-17T00:50:36.790550537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 17 00:50:36.792949 env[1585]: time="2025-05-17T00:50:36.792722886Z" level=info msg="CreateContainer within sandbox 
\"81b094fb325ad0ba57db451213dee1030d64f85b9729b7a05c161d12413e05d6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:50:36.819937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810156282.mount: Deactivated successfully. May 17 00:50:36.839669 env[1585]: time="2025-05-17T00:50:36.839625230Z" level=info msg="CreateContainer within sandbox \"81b094fb325ad0ba57db451213dee1030d64f85b9729b7a05c161d12413e05d6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3f8f694ea71f0e7664afc0a3402e617877bfb7c04dc143f9c60a7a2602e163a5\"" May 17 00:50:36.841545 env[1585]: time="2025-05-17T00:50:36.840209184Z" level=info msg="StartContainer for \"3f8f694ea71f0e7664afc0a3402e617877bfb7c04dc143f9c60a7a2602e163a5\"" May 17 00:50:36.902589 env[1585]: time="2025-05-17T00:50:36.901840289Z" level=info msg="StartContainer for \"3f8f694ea71f0e7664afc0a3402e617877bfb7c04dc143f9c60a7a2602e163a5\" returns successfully" May 17 00:50:37.434459 systemd[1]: run-containerd-runc-k8s.io-3f8f694ea71f0e7664afc0a3402e617877bfb7c04dc143f9c60a7a2602e163a5-runc.rMjRJo.mount: Deactivated successfully. May 17 00:50:37.434615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f8f694ea71f0e7664afc0a3402e617877bfb7c04dc143f9c60a7a2602e163a5-rootfs.mount: Deactivated successfully. 
May 17 00:50:38.046250 env[1585]: time="2025-05-17T00:50:38.046013245Z" level=info msg="shim disconnected" id=3f8f694ea71f0e7664afc0a3402e617877bfb7c04dc143f9c60a7a2602e163a5 May 17 00:50:38.046250 env[1585]: time="2025-05-17T00:50:38.046083320Z" level=warning msg="cleaning up after shim disconnected" id=3f8f694ea71f0e7664afc0a3402e617877bfb7c04dc143f9c60a7a2602e163a5 namespace=k8s.io May 17 00:50:38.046250 env[1585]: time="2025-05-17T00:50:38.046094519Z" level=info msg="cleaning up dead shim" May 17 00:50:38.053716 env[1585]: time="2025-05-17T00:50:38.053674511Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:50:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3383 runtime=io.containerd.runc.v2\n" May 17 00:50:38.481453 kubelet[2655]: E0517 00:50:38.481394 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ksxs" podUID="1d05e6a1-f683-492a-9425-3efff5a40b11" May 17 00:50:38.631658 env[1585]: time="2025-05-17T00:50:38.631418792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:50:40.482997 kubelet[2655]: E0517 00:50:40.482939 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ksxs" podUID="1d05e6a1-f683-492a-9425-3efff5a40b11" May 17 00:50:41.273267 env[1585]: time="2025-05-17T00:50:41.273220949Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:41.281860 env[1585]: time="2025-05-17T00:50:41.281824912Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:41.289218 env[1585]: time="2025-05-17T00:50:41.289184241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:41.294905 env[1585]: time="2025-05-17T00:50:41.294872847Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:41.295692 env[1585]: time="2025-05-17T00:50:41.295666712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 17 00:50:41.299470 env[1585]: time="2025-05-17T00:50:41.299432050Z" level=info msg="CreateContainer within sandbox \"81b094fb325ad0ba57db451213dee1030d64f85b9729b7a05c161d12413e05d6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:50:41.328830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771168263.mount: Deactivated successfully. 
May 17 00:50:41.344785 env[1585]: time="2025-05-17T00:50:41.344733268Z" level=info msg="CreateContainer within sandbox \"81b094fb325ad0ba57db451213dee1030d64f85b9729b7a05c161d12413e05d6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0e8c32c06eb92e4ed79b11a9e14d12ed7603dfcd6f391c83bc3498c47878ae7f\"" May 17 00:50:41.345316 env[1585]: time="2025-05-17T00:50:41.345290669Z" level=info msg="StartContainer for \"0e8c32c06eb92e4ed79b11a9e14d12ed7603dfcd6f391c83bc3498c47878ae7f\"" May 17 00:50:41.413518 env[1585]: time="2025-05-17T00:50:41.412778867Z" level=info msg="StartContainer for \"0e8c32c06eb92e4ed79b11a9e14d12ed7603dfcd6f391c83bc3498c47878ae7f\" returns successfully" May 17 00:50:42.326061 systemd[1]: run-containerd-runc-k8s.io-0e8c32c06eb92e4ed79b11a9e14d12ed7603dfcd6f391c83bc3498c47878ae7f-runc.G6SjWt.mount: Deactivated successfully. May 17 00:50:42.480572 kubelet[2655]: E0517 00:50:42.480533 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8ksxs" podUID="1d05e6a1-f683-492a-9425-3efff5a40b11" May 17 00:50:42.699368 env[1585]: time="2025-05-17T00:50:42.699241775Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:50:42.717448 kubelet[2655]: I0517 00:50:42.715903 2655 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:50:42.737335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e8c32c06eb92e4ed79b11a9e14d12ed7603dfcd6f391c83bc3498c47878ae7f-rootfs.mount: Deactivated successfully. 
May 17 00:50:42.926340 kubelet[2655]: I0517 00:50:42.926293 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/91ca7308-a9d6-4eb9-b8c4-045158a71d72-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-7vz96\" (UID: \"91ca7308-a9d6-4eb9-b8c4-045158a71d72\") " pod="calico-system/goldmane-8f77d7b6c-7vz96" May 17 00:50:42.926340 kubelet[2655]: I0517 00:50:42.926345 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8dp\" (UniqueName: \"kubernetes.io/projected/6298c7d9-10fe-4278-8b14-e26174630105-kube-api-access-5j8dp\") pod \"whisker-75d6776cf8-sl7xk\" (UID: \"6298c7d9-10fe-4278-8b14-e26174630105\") " pod="calico-system/whisker-75d6776cf8-sl7xk" May 17 00:50:43.539051 kubelet[2655]: I0517 00:50:42.926369 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f1c698e-516f-4fe7-9f6f-d4affc1db4e0-config-volume\") pod \"coredns-7c65d6cfc9-9b8l5\" (UID: \"7f1c698e-516f-4fe7-9f6f-d4affc1db4e0\") " pod="kube-system/coredns-7c65d6cfc9-9b8l5" May 17 00:50:43.539051 kubelet[2655]: I0517 00:50:42.926391 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6298c7d9-10fe-4278-8b14-e26174630105-whisker-ca-bundle\") pod \"whisker-75d6776cf8-sl7xk\" (UID: \"6298c7d9-10fe-4278-8b14-e26174630105\") " pod="calico-system/whisker-75d6776cf8-sl7xk" May 17 00:50:43.539051 kubelet[2655]: I0517 00:50:42.926410 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91ca7308-a9d6-4eb9-b8c4-045158a71d72-config\") pod \"goldmane-8f77d7b6c-7vz96\" (UID: \"91ca7308-a9d6-4eb9-b8c4-045158a71d72\") " 
pod="calico-system/goldmane-8f77d7b6c-7vz96" May 17 00:50:43.539051 kubelet[2655]: I0517 00:50:42.926426 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/627944b6-ba25-4900-b922-e4dd80aebc0f-calico-apiserver-certs\") pod \"calico-apiserver-76f68fbb46-wvb9m\" (UID: \"627944b6-ba25-4900-b922-e4dd80aebc0f\") " pod="calico-apiserver/calico-apiserver-76f68fbb46-wvb9m" May 17 00:50:43.539051 kubelet[2655]: I0517 00:50:42.926444 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69tf7\" (UniqueName: \"kubernetes.io/projected/8dec4dde-a041-4dab-a0cc-b7a73829dbcb-kube-api-access-69tf7\") pod \"coredns-7c65d6cfc9-kkp5z\" (UID: \"8dec4dde-a041-4dab-a0cc-b7a73829dbcb\") " pod="kube-system/coredns-7c65d6cfc9-kkp5z" May 17 00:50:43.539200 kubelet[2655]: I0517 00:50:42.926460 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a468b308-fb40-4092-92f2-ce4832332fb4-calico-apiserver-certs\") pod \"calico-apiserver-76f68fbb46-gf7hk\" (UID: \"a468b308-fb40-4092-92f2-ce4832332fb4\") " pod="calico-apiserver/calico-apiserver-76f68fbb46-gf7hk" May 17 00:50:43.539200 kubelet[2655]: I0517 00:50:42.926476 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91ca7308-a9d6-4eb9-b8c4-045158a71d72-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-7vz96\" (UID: \"91ca7308-a9d6-4eb9-b8c4-045158a71d72\") " pod="calico-system/goldmane-8f77d7b6c-7vz96" May 17 00:50:43.539200 kubelet[2655]: I0517 00:50:42.926517 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5dkk\" (UniqueName: 
\"kubernetes.io/projected/bc9691d8-0209-4d9d-befa-8a2f81686ebd-kube-api-access-k5dkk\") pod \"calico-kube-controllers-545d5864b5-tt9qd\" (UID: \"bc9691d8-0209-4d9d-befa-8a2f81686ebd\") " pod="calico-system/calico-kube-controllers-545d5864b5-tt9qd" May 17 00:50:43.539200 kubelet[2655]: I0517 00:50:42.926536 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8dec4dde-a041-4dab-a0cc-b7a73829dbcb-config-volume\") pod \"coredns-7c65d6cfc9-kkp5z\" (UID: \"8dec4dde-a041-4dab-a0cc-b7a73829dbcb\") " pod="kube-system/coredns-7c65d6cfc9-kkp5z" May 17 00:50:43.539200 kubelet[2655]: I0517 00:50:42.926553 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmmf5\" (UniqueName: \"kubernetes.io/projected/7f1c698e-516f-4fe7-9f6f-d4affc1db4e0-kube-api-access-kmmf5\") pod \"coredns-7c65d6cfc9-9b8l5\" (UID: \"7f1c698e-516f-4fe7-9f6f-d4affc1db4e0\") " pod="kube-system/coredns-7c65d6cfc9-9b8l5" May 17 00:50:43.539320 kubelet[2655]: I0517 00:50:42.926572 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc9691d8-0209-4d9d-befa-8a2f81686ebd-tigera-ca-bundle\") pod \"calico-kube-controllers-545d5864b5-tt9qd\" (UID: \"bc9691d8-0209-4d9d-befa-8a2f81686ebd\") " pod="calico-system/calico-kube-controllers-545d5864b5-tt9qd" May 17 00:50:43.539320 kubelet[2655]: I0517 00:50:42.926589 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6lg\" (UniqueName: \"kubernetes.io/projected/a468b308-fb40-4092-92f2-ce4832332fb4-kube-api-access-xk6lg\") pod \"calico-apiserver-76f68fbb46-gf7hk\" (UID: \"a468b308-fb40-4092-92f2-ce4832332fb4\") " pod="calico-apiserver/calico-apiserver-76f68fbb46-gf7hk" May 17 00:50:43.539320 kubelet[2655]: I0517 00:50:42.926607 2655 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6298c7d9-10fe-4278-8b14-e26174630105-whisker-backend-key-pair\") pod \"whisker-75d6776cf8-sl7xk\" (UID: \"6298c7d9-10fe-4278-8b14-e26174630105\") " pod="calico-system/whisker-75d6776cf8-sl7xk" May 17 00:50:43.539320 kubelet[2655]: I0517 00:50:42.926624 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbrwb\" (UniqueName: \"kubernetes.io/projected/91ca7308-a9d6-4eb9-b8c4-045158a71d72-kube-api-access-bbrwb\") pod \"goldmane-8f77d7b6c-7vz96\" (UID: \"91ca7308-a9d6-4eb9-b8c4-045158a71d72\") " pod="calico-system/goldmane-8f77d7b6c-7vz96" May 17 00:50:43.539320 kubelet[2655]: I0517 00:50:42.926639 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqgd\" (UniqueName: \"kubernetes.io/projected/627944b6-ba25-4900-b922-e4dd80aebc0f-kube-api-access-dpqgd\") pod \"calico-apiserver-76f68fbb46-wvb9m\" (UID: \"627944b6-ba25-4900-b922-e4dd80aebc0f\") " pod="calico-apiserver/calico-apiserver-76f68fbb46-wvb9m" May 17 00:50:43.680673 env[1585]: time="2025-05-17T00:50:43.680607857Z" level=info msg="shim disconnected" id=0e8c32c06eb92e4ed79b11a9e14d12ed7603dfcd6f391c83bc3498c47878ae7f May 17 00:50:43.680673 env[1585]: time="2025-05-17T00:50:43.680671933Z" level=warning msg="cleaning up after shim disconnected" id=0e8c32c06eb92e4ed79b11a9e14d12ed7603dfcd6f391c83bc3498c47878ae7f namespace=k8s.io May 17 00:50:43.680834 env[1585]: time="2025-05-17T00:50:43.680681813Z" level=info msg="cleaning up dead shim" May 17 00:50:43.704496 env[1585]: time="2025-05-17T00:50:43.701579113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-7vz96,Uid:91ca7308-a9d6-4eb9-b8c4-045158a71d72,Namespace:calico-system,Attempt:0,}" May 17 00:50:43.704496 env[1585]: 
time="2025-05-17T00:50:43.702011405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kkp5z,Uid:8dec4dde-a041-4dab-a0cc-b7a73829dbcb,Namespace:kube-system,Attempt:0,}" May 17 00:50:43.710080 env[1585]: time="2025-05-17T00:50:43.710023916Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3463 runtime=io.containerd.runc.v2\n" May 17 00:50:43.798336 env[1585]: time="2025-05-17T00:50:43.797993509Z" level=error msg="Failed to destroy network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:43.798685 env[1585]: time="2025-05-17T00:50:43.798626387Z" level=error msg="encountered an error cleaning up failed sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:43.798748 env[1585]: time="2025-05-17T00:50:43.798704102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-7vz96,Uid:91ca7308-a9d6-4eb9-b8c4-045158a71d72,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:43.799019 kubelet[2655]: E0517 00:50:43.798950 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:43.799098 kubelet[2655]: E0517 00:50:43.799032 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-7vz96" May 17 00:50:43.799098 kubelet[2655]: E0517 00:50:43.799067 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-7vz96" May 17 00:50:43.799195 kubelet[2655]: E0517 00:50:43.799120 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-7vz96_calico-system(91ca7308-a9d6-4eb9-b8c4-045158a71d72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-7vz96_calico-system(91ca7308-a9d6-4eb9-b8c4-045158a71d72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" 
podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:50:43.823716 env[1585]: time="2025-05-17T00:50:43.823662335Z" level=error msg="Failed to destroy network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:43.824183 env[1585]: time="2025-05-17T00:50:43.824153462Z" level=error msg="encountered an error cleaning up failed sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:43.824311 env[1585]: time="2025-05-17T00:50:43.824286014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kkp5z,Uid:8dec4dde-a041-4dab-a0cc-b7a73829dbcb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:43.824625 kubelet[2655]: E0517 00:50:43.824579 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:43.824717 kubelet[2655]: E0517 00:50:43.824646 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kkp5z" May 17 00:50:43.824717 kubelet[2655]: E0517 00:50:43.824671 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kkp5z" May 17 00:50:43.824774 kubelet[2655]: E0517 00:50:43.824711 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kkp5z_kube-system(8dec4dde-a041-4dab-a0cc-b7a73829dbcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kkp5z_kube-system(8dec4dde-a041-4dab-a0cc-b7a73829dbcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kkp5z" podUID="8dec4dde-a041-4dab-a0cc-b7a73829dbcb" May 17 00:50:43.967564 env[1585]: time="2025-05-17T00:50:43.967526199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9b8l5,Uid:7f1c698e-516f-4fe7-9f6f-d4affc1db4e0,Namespace:kube-system,Attempt:0,}" May 17 00:50:43.975192 env[1585]: time="2025-05-17T00:50:43.975147616Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76f68fbb46-gf7hk,Uid:a468b308-fb40-4092-92f2-ce4832332fb4,Namespace:calico-apiserver,Attempt:0,}" May 17 00:50:43.990211 env[1585]: time="2025-05-17T00:50:43.990163785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f68fbb46-wvb9m,Uid:627944b6-ba25-4900-b922-e4dd80aebc0f,Namespace:calico-apiserver,Attempt:0,}" May 17 00:50:43.993043 env[1585]: time="2025-05-17T00:50:43.993003677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d6776cf8-sl7xk,Uid:6298c7d9-10fe-4278-8b14-e26174630105,Namespace:calico-system,Attempt:0,}" May 17 00:50:43.999068 env[1585]: time="2025-05-17T00:50:43.999031519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545d5864b5-tt9qd,Uid:bc9691d8-0209-4d9d-befa-8a2f81686ebd,Namespace:calico-system,Attempt:0,}" May 17 00:50:44.095729 env[1585]: time="2025-05-17T00:50:44.095080330Z" level=error msg="Failed to destroy network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.096975 env[1585]: time="2025-05-17T00:50:44.096926451Z" level=error msg="encountered an error cleaning up failed sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.097039 env[1585]: time="2025-05-17T00:50:44.096994166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9b8l5,Uid:7f1c698e-516f-4fe7-9f6f-d4affc1db4e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.097235 kubelet[2655]: E0517 00:50:44.097199 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.097292 kubelet[2655]: E0517 00:50:44.097262 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9b8l5" May 17 00:50:44.097292 kubelet[2655]: E0517 00:50:44.097279 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9b8l5" May 17 00:50:44.097366 kubelet[2655]: E0517 00:50:44.097324 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9b8l5_kube-system(7f1c698e-516f-4fe7-9f6f-d4affc1db4e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-9b8l5_kube-system(7f1c698e-516f-4fe7-9f6f-d4affc1db4e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9b8l5" podUID="7f1c698e-516f-4fe7-9f6f-d4affc1db4e0" May 17 00:50:44.156759 env[1585]: time="2025-05-17T00:50:44.156698521Z" level=error msg="Failed to destroy network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.157251 env[1585]: time="2025-05-17T00:50:44.157216608Z" level=error msg="encountered an error cleaning up failed sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.157384 env[1585]: time="2025-05-17T00:50:44.157356919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f68fbb46-gf7hk,Uid:a468b308-fb40-4092-92f2-ce4832332fb4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.159684 kubelet[2655]: E0517 00:50:44.157778 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.159684 kubelet[2655]: E0517 00:50:44.157851 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f68fbb46-gf7hk" May 17 00:50:44.159684 kubelet[2655]: E0517 00:50:44.157881 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f68fbb46-gf7hk" May 17 00:50:44.159858 kubelet[2655]: E0517 00:50:44.157934 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76f68fbb46-gf7hk_calico-apiserver(a468b308-fb40-4092-92f2-ce4832332fb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76f68fbb46-gf7hk_calico-apiserver(a468b308-fb40-4092-92f2-ce4832332fb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f68fbb46-gf7hk" podUID="a468b308-fb40-4092-92f2-ce4832332fb4" May 17 00:50:44.205973 env[1585]: time="2025-05-17T00:50:44.205910592Z" level=error msg="Failed to destroy network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.206701 env[1585]: time="2025-05-17T00:50:44.206660824Z" level=error msg="encountered an error cleaning up failed sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.206751 env[1585]: time="2025-05-17T00:50:44.206718580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f68fbb46-wvb9m,Uid:627944b6-ba25-4900-b922-e4dd80aebc0f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.208033 kubelet[2655]: E0517 00:50:44.206944 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 
00:50:44.208033 kubelet[2655]: E0517 00:50:44.207000 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f68fbb46-wvb9m" May 17 00:50:44.208033 kubelet[2655]: E0517 00:50:44.207019 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76f68fbb46-wvb9m" May 17 00:50:44.208214 kubelet[2655]: E0517 00:50:44.207058 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76f68fbb46-wvb9m_calico-apiserver(627944b6-ba25-4900-b922-e4dd80aebc0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76f68fbb46-wvb9m_calico-apiserver(627944b6-ba25-4900-b922-e4dd80aebc0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f68fbb46-wvb9m" podUID="627944b6-ba25-4900-b922-e4dd80aebc0f" May 17 00:50:44.232192 env[1585]: time="2025-05-17T00:50:44.232098865Z" level=error msg="Failed to destroy network for sandbox 
\"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.233975 env[1585]: time="2025-05-17T00:50:44.232465482Z" level=error msg="encountered an error cleaning up failed sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.233975 env[1585]: time="2025-05-17T00:50:44.232530398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d6776cf8-sl7xk,Uid:6298c7d9-10fe-4278-8b14-e26174630105,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.234181 kubelet[2655]: E0517 00:50:44.233627 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.234181 kubelet[2655]: E0517 00:50:44.233683 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d6776cf8-sl7xk" May 17 00:50:44.234181 kubelet[2655]: E0517 00:50:44.233718 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d6776cf8-sl7xk" May 17 00:50:44.234272 kubelet[2655]: E0517 00:50:44.233759 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75d6776cf8-sl7xk_calico-system(6298c7d9-10fe-4278-8b14-e26174630105)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75d6776cf8-sl7xk_calico-system(6298c7d9-10fe-4278-8b14-e26174630105)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d6776cf8-sl7xk" podUID="6298c7d9-10fe-4278-8b14-e26174630105" May 17 00:50:44.245204 env[1585]: time="2025-05-17T00:50:44.245152185Z" level=error msg="Failed to destroy network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.245692 env[1585]: time="2025-05-17T00:50:44.245660832Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.245829 env[1585]: time="2025-05-17T00:50:44.245800703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545d5864b5-tt9qd,Uid:bc9691d8-0209-4d9d-befa-8a2f81686ebd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.246154 kubelet[2655]: E0517 00:50:44.246110 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.246251 kubelet[2655]: E0517 00:50:44.246174 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-545d5864b5-tt9qd" May 17 00:50:44.246251 kubelet[2655]: E0517 00:50:44.246192 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-545d5864b5-tt9qd" May 17 00:50:44.246308 kubelet[2655]: E0517 00:50:44.246244 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-545d5864b5-tt9qd_calico-system(bc9691d8-0209-4d9d-befa-8a2f81686ebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-545d5864b5-tt9qd_calico-system(bc9691d8-0209-4d9d-befa-8a2f81686ebd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-545d5864b5-tt9qd" podUID="bc9691d8-0209-4d9d-befa-8a2f81686ebd" May 17 00:50:44.484480 env[1585]: time="2025-05-17T00:50:44.484373138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8ksxs,Uid:1d05e6a1-f683-492a-9425-3efff5a40b11,Namespace:calico-system,Attempt:0,}" May 17 00:50:44.561899 env[1585]: time="2025-05-17T00:50:44.561846389Z" level=error msg="Failed to destroy network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.562373 env[1585]: time="2025-05-17T00:50:44.562341557Z" level=error msg="encountered an error cleaning up failed sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.562514 env[1585]: time="2025-05-17T00:50:44.562465869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8ksxs,Uid:1d05e6a1-f683-492a-9425-3efff5a40b11,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.562853 kubelet[2655]: E0517 00:50:44.562812 2655 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.563131 kubelet[2655]: E0517 00:50:44.562888 2655 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8ksxs" May 17 00:50:44.563131 kubelet[2655]: E0517 00:50:44.562909 2655 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8ksxs" May 17 00:50:44.563131 kubelet[2655]: E0517 00:50:44.562964 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8ksxs_calico-system(1d05e6a1-f683-492a-9425-3efff5a40b11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8ksxs_calico-system(1d05e6a1-f683-492a-9425-3efff5a40b11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8ksxs" podUID="1d05e6a1-f683-492a-9425-3efff5a40b11" May 17 00:50:44.642735 kubelet[2655]: I0517 00:50:44.641542 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:50:44.643228 env[1585]: time="2025-05-17T00:50:44.643198150Z" level=info msg="StopPodSandbox for \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\"" May 17 00:50:44.644976 kubelet[2655]: I0517 00:50:44.644624 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:50:44.645227 env[1585]: time="2025-05-17T00:50:44.645201261Z" level=info msg="StopPodSandbox for \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\"" May 17 00:50:44.646563 kubelet[2655]: I0517 00:50:44.646248 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:50:44.646785 
env[1585]: time="2025-05-17T00:50:44.646763320Z" level=info msg="StopPodSandbox for \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\"" May 17 00:50:44.655080 env[1585]: time="2025-05-17T00:50:44.655044427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:50:44.658922 kubelet[2655]: I0517 00:50:44.658519 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:50:44.659639 env[1585]: time="2025-05-17T00:50:44.659610293Z" level=info msg="StopPodSandbox for \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\"" May 17 00:50:44.664849 kubelet[2655]: I0517 00:50:44.664447 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:50:44.665207 env[1585]: time="2025-05-17T00:50:44.665176174Z" level=info msg="StopPodSandbox for \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\"" May 17 00:50:44.666910 kubelet[2655]: I0517 00:50:44.666553 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:50:44.667263 env[1585]: time="2025-05-17T00:50:44.667233402Z" level=info msg="StopPodSandbox for \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\"" May 17 00:50:44.690032 kubelet[2655]: I0517 00:50:44.690000 2655 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:50:44.694063 env[1585]: time="2025-05-17T00:50:44.694023396Z" level=info msg="StopPodSandbox for \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\"" May 17 00:50:44.704676 kubelet[2655]: I0517 00:50:44.704634 2655 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:50:44.707098 env[1585]: time="2025-05-17T00:50:44.707037758Z" level=info msg="StopPodSandbox for \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\"" May 17 00:50:44.742220 env[1585]: time="2025-05-17T00:50:44.742165176Z" level=error msg="StopPodSandbox for \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\" failed" error="failed to destroy network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.742634 kubelet[2655]: E0517 00:50:44.742588 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:50:44.742735 kubelet[2655]: E0517 00:50:44.742650 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0"} May 17 00:50:44.742735 kubelet[2655]: E0517 00:50:44.742720 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc9691d8-0209-4d9d-befa-8a2f81686ebd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" May 17 00:50:44.742830 kubelet[2655]: E0517 00:50:44.742751 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc9691d8-0209-4d9d-befa-8a2f81686ebd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-545d5864b5-tt9qd" podUID="bc9691d8-0209-4d9d-befa-8a2f81686ebd" May 17 00:50:44.754924 env[1585]: time="2025-05-17T00:50:44.754863878Z" level=error msg="StopPodSandbox for \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\" failed" error="failed to destroy network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.755371 kubelet[2655]: E0517 00:50:44.755304 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:50:44.755460 kubelet[2655]: E0517 00:50:44.755375 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b"} May 17 00:50:44.755460 kubelet[2655]: E0517 
00:50:44.755429 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6298c7d9-10fe-4278-8b14-e26174630105\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:50:44.755573 kubelet[2655]: E0517 00:50:44.755453 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6298c7d9-10fe-4278-8b14-e26174630105\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d6776cf8-sl7xk" podUID="6298c7d9-10fe-4278-8b14-e26174630105" May 17 00:50:44.802938 env[1585]: time="2025-05-17T00:50:44.802879506Z" level=error msg="StopPodSandbox for \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\" failed" error="failed to destroy network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.803359 kubelet[2655]: E0517 00:50:44.803313 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:50:44.803440 kubelet[2655]: E0517 00:50:44.803379 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6"} May 17 00:50:44.803440 kubelet[2655]: E0517 00:50:44.803415 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91ca7308-a9d6-4eb9-b8c4-045158a71d72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:50:44.803548 kubelet[2655]: E0517 00:50:44.803446 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91ca7308-a9d6-4eb9-b8c4-045158a71d72\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:50:44.823347 env[1585]: time="2025-05-17T00:50:44.823285112Z" level=error msg="StopPodSandbox for \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\" failed" error="failed to destroy network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.823697 env[1585]: time="2025-05-17T00:50:44.823589852Z" level=error msg="StopPodSandbox for \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\" failed" error="failed to destroy network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.823885 kubelet[2655]: E0517 00:50:44.823843 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:50:44.823969 kubelet[2655]: E0517 00:50:44.823907 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8"} May 17 00:50:44.823969 kubelet[2655]: E0517 00:50:44.823941 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a468b308-fb40-4092-92f2-ce4832332fb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:50:44.824053 kubelet[2655]: E0517 00:50:44.823980 2655 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"a468b308-fb40-4092-92f2-ce4832332fb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f68fbb46-gf7hk" podUID="a468b308-fb40-4092-92f2-ce4832332fb4" May 17 00:50:44.824146 kubelet[2655]: E0517 00:50:44.824118 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:50:44.824195 kubelet[2655]: E0517 00:50:44.824158 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c"} May 17 00:50:44.824195 kubelet[2655]: E0517 00:50:44.824186 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"627944b6-ba25-4900-b922-e4dd80aebc0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:50:44.824263 kubelet[2655]: E0517 00:50:44.824202 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"627944b6-ba25-4900-b922-e4dd80aebc0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76f68fbb46-wvb9m" podUID="627944b6-ba25-4900-b922-e4dd80aebc0f" May 17 00:50:44.824707 env[1585]: time="2025-05-17T00:50:44.824663983Z" level=error msg="StopPodSandbox for \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\" failed" error="failed to destroy network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.824860 kubelet[2655]: E0517 00:50:44.824817 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:50:44.824924 kubelet[2655]: E0517 00:50:44.824864 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78"} May 17 00:50:44.824924 kubelet[2655]: E0517 00:50:44.824889 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d05e6a1-f683-492a-9425-3efff5a40b11\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:50:44.824994 kubelet[2655]: E0517 00:50:44.824911 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d05e6a1-f683-492a-9425-3efff5a40b11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8ksxs" podUID="1d05e6a1-f683-492a-9425-3efff5a40b11" May 17 00:50:44.826359 env[1585]: time="2025-05-17T00:50:44.826320316Z" level=error msg="StopPodSandbox for \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\" failed" error="failed to destroy network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.826643 kubelet[2655]: E0517 00:50:44.826597 2655 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:50:44.826768 
kubelet[2655]: E0517 00:50:44.826647 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b"} May 17 00:50:44.826768 kubelet[2655]: E0517 00:50:44.826671 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f1c698e-516f-4fe7-9f6f-d4affc1db4e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:50:44.826768 kubelet[2655]: E0517 00:50:44.826701 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f1c698e-516f-4fe7-9f6f-d4affc1db4e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9b8l5" podUID="7f1c698e-516f-4fe7-9f6f-d4affc1db4e0" May 17 00:50:44.827405 env[1585]: time="2025-05-17T00:50:44.827366089Z" level=error msg="StopPodSandbox for \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\" failed" error="failed to destroy network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:50:44.827593 kubelet[2655]: E0517 00:50:44.827564 2655 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:50:44.827659 kubelet[2655]: E0517 00:50:44.827598 2655 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772"} May 17 00:50:44.827659 kubelet[2655]: E0517 00:50:44.827638 2655 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8dec4dde-a041-4dab-a0cc-b7a73829dbcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:50:44.827728 kubelet[2655]: E0517 00:50:44.827656 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8dec4dde-a041-4dab-a0cc-b7a73829dbcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kkp5z" podUID="8dec4dde-a041-4dab-a0cc-b7a73829dbcb" May 17 00:50:49.745852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385677821.mount: Deactivated successfully. 
May 17 00:50:49.795304 env[1585]: time="2025-05-17T00:50:49.795243332Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:50.428945 env[1585]: time="2025-05-17T00:50:50.428888919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:50.435415 env[1585]: time="2025-05-17T00:50:50.435368078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:50.441104 env[1585]: time="2025-05-17T00:50:50.441056241Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:50.441804 env[1585]: time="2025-05-17T00:50:50.441772881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 17 00:50:50.460124 env[1585]: time="2025-05-17T00:50:50.460077540Z" level=info msg="CreateContainer within sandbox \"81b094fb325ad0ba57db451213dee1030d64f85b9729b7a05c161d12413e05d6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:50:50.504310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180594694.mount: Deactivated successfully. 
May 17 00:50:50.523452 env[1585]: time="2025-05-17T00:50:50.523392729Z" level=info msg="CreateContainer within sandbox \"81b094fb325ad0ba57db451213dee1030d64f85b9729b7a05c161d12413e05d6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d\"" May 17 00:50:50.525372 env[1585]: time="2025-05-17T00:50:50.525334540Z" level=info msg="StartContainer for \"75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d\"" May 17 00:50:50.584935 env[1585]: time="2025-05-17T00:50:50.584856181Z" level=info msg="StartContainer for \"75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d\" returns successfully" May 17 00:50:50.749676 kubelet[2655]: I0517 00:50:50.748514 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m4zf5" podStartSLOduration=0.852461036 podStartE2EDuration="18.748472215s" podCreationTimestamp="2025-05-17 00:50:32 +0000 UTC" firstStartedPulling="2025-05-17 00:50:32.546832242 +0000 UTC m=+22.278020945" lastFinishedPulling="2025-05-17 00:50:50.442843381 +0000 UTC m=+40.174032124" observedRunningTime="2025-05-17 00:50:50.742807731 +0000 UTC m=+40.473996474" watchObservedRunningTime="2025-05-17 00:50:50.748472215 +0000 UTC m=+40.479660958" May 17 00:50:50.787879 systemd[1]: run-containerd-runc-k8s.io-75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d-runc.gsX40S.mount: Deactivated successfully. May 17 00:50:50.906117 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:50:50.906258 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 17 00:50:51.044104 env[1585]: time="2025-05-17T00:50:51.043986750Z" level=info msg="StopPodSandbox for \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\"" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.137 [INFO][3897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.137 [INFO][3897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" iface="eth0" netns="/var/run/netns/cni-f2f4a93e-5af0-922f-db7c-c62d46c3e784" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.137 [INFO][3897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" iface="eth0" netns="/var/run/netns/cni-f2f4a93e-5af0-922f-db7c-c62d46c3e784" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.137 [INFO][3897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" iface="eth0" netns="/var/run/netns/cni-f2f4a93e-5af0-922f-db7c-c62d46c3e784" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.137 [INFO][3897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.137 [INFO][3897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.168 [INFO][3911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.169 [INFO][3911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.169 [INFO][3911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.180 [WARNING][3911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.180 [INFO][3911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.183 [INFO][3911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:51.188545 env[1585]: 2025-05-17 00:50:51.187 [INFO][3897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:50:51.191043 systemd[1]: run-netns-cni\x2df2f4a93e\x2d5af0\x2d922f\x2ddb7c\x2dc62d46c3e784.mount: Deactivated successfully. 
May 17 00:50:51.197184 env[1585]: time="2025-05-17T00:50:51.197098287Z" level=info msg="TearDown network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\" successfully" May 17 00:50:51.197382 env[1585]: time="2025-05-17T00:50:51.197349833Z" level=info msg="StopPodSandbox for \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\" returns successfully" May 17 00:50:51.297996 kubelet[2655]: I0517 00:50:51.297860 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8dp\" (UniqueName: \"kubernetes.io/projected/6298c7d9-10fe-4278-8b14-e26174630105-kube-api-access-5j8dp\") pod \"6298c7d9-10fe-4278-8b14-e26174630105\" (UID: \"6298c7d9-10fe-4278-8b14-e26174630105\") " May 17 00:50:51.297996 kubelet[2655]: I0517 00:50:51.297920 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6298c7d9-10fe-4278-8b14-e26174630105-whisker-ca-bundle\") pod \"6298c7d9-10fe-4278-8b14-e26174630105\" (UID: \"6298c7d9-10fe-4278-8b14-e26174630105\") " May 17 00:50:51.297996 kubelet[2655]: I0517 00:50:51.297960 2655 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6298c7d9-10fe-4278-8b14-e26174630105-whisker-backend-key-pair\") pod \"6298c7d9-10fe-4278-8b14-e26174630105\" (UID: \"6298c7d9-10fe-4278-8b14-e26174630105\") " May 17 00:50:51.299673 kubelet[2655]: I0517 00:50:51.299634 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6298c7d9-10fe-4278-8b14-e26174630105-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6298c7d9-10fe-4278-8b14-e26174630105" (UID: "6298c7d9-10fe-4278-8b14-e26174630105"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:50:51.330881 kubelet[2655]: I0517 00:50:51.330840 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6298c7d9-10fe-4278-8b14-e26174630105-kube-api-access-5j8dp" (OuterVolumeSpecName: "kube-api-access-5j8dp") pod "6298c7d9-10fe-4278-8b14-e26174630105" (UID: "6298c7d9-10fe-4278-8b14-e26174630105"). InnerVolumeSpecName "kube-api-access-5j8dp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:50:51.331918 kubelet[2655]: I0517 00:50:51.331878 2655 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6298c7d9-10fe-4278-8b14-e26174630105-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6298c7d9-10fe-4278-8b14-e26174630105" (UID: "6298c7d9-10fe-4278-8b14-e26174630105"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:50:51.398929 kubelet[2655]: I0517 00:50:51.398890 2655 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6298c7d9-10fe-4278-8b14-e26174630105-whisker-backend-key-pair\") on node \"ci-3510.3.7-n-e6f3637a46\" DevicePath \"\"" May 17 00:50:51.399127 kubelet[2655]: I0517 00:50:51.399112 2655 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8dp\" (UniqueName: \"kubernetes.io/projected/6298c7d9-10fe-4278-8b14-e26174630105-kube-api-access-5j8dp\") on node \"ci-3510.3.7-n-e6f3637a46\" DevicePath \"\"" May 17 00:50:51.399216 kubelet[2655]: I0517 00:50:51.399205 2655 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6298c7d9-10fe-4278-8b14-e26174630105-whisker-ca-bundle\") on node \"ci-3510.3.7-n-e6f3637a46\" DevicePath \"\"" May 17 00:50:51.745804 systemd[1]: 
var-lib-kubelet-pods-6298c7d9\x2d10fe\x2d4278\x2d8b14\x2de26174630105-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5j8dp.mount: Deactivated successfully. May 17 00:50:51.745960 systemd[1]: var-lib-kubelet-pods-6298c7d9\x2d10fe\x2d4278\x2d8b14\x2de26174630105-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:50:51.901915 kubelet[2655]: I0517 00:50:51.901870 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90b80adb-a2bc-44f9-b9c7-b7b81f9e9897-whisker-ca-bundle\") pod \"whisker-5f88f6fd44-x7zmt\" (UID: \"90b80adb-a2bc-44f9-b9c7-b7b81f9e9897\") " pod="calico-system/whisker-5f88f6fd44-x7zmt" May 17 00:50:51.902407 kubelet[2655]: I0517 00:50:51.902368 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/90b80adb-a2bc-44f9-b9c7-b7b81f9e9897-whisker-backend-key-pair\") pod \"whisker-5f88f6fd44-x7zmt\" (UID: \"90b80adb-a2bc-44f9-b9c7-b7b81f9e9897\") " pod="calico-system/whisker-5f88f6fd44-x7zmt" May 17 00:50:51.902573 kubelet[2655]: I0517 00:50:51.902533 2655 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v85t\" (UniqueName: \"kubernetes.io/projected/90b80adb-a2bc-44f9-b9c7-b7b81f9e9897-kube-api-access-2v85t\") pod \"whisker-5f88f6fd44-x7zmt\" (UID: \"90b80adb-a2bc-44f9-b9c7-b7b81f9e9897\") " pod="calico-system/whisker-5f88f6fd44-x7zmt" May 17 00:50:52.107358 env[1585]: time="2025-05-17T00:50:52.106940884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f88f6fd44-x7zmt,Uid:90b80adb-a2bc-44f9-b9c7-b7b81f9e9897,Namespace:calico-system,Attempt:0,}" May 17 00:50:52.277972 systemd-networkd[1757]: cali55da5aaa5d2: Link UP May 17 00:50:52.290569 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes 
ready May 17 00:50:52.290706 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali55da5aaa5d2: link becomes ready May 17 00:50:52.291007 systemd-networkd[1757]: cali55da5aaa5d2: Gained carrier May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.170 [INFO][3954] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.184 [INFO][3954] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0 whisker-5f88f6fd44- calico-system 90b80adb-a2bc-44f9-b9c7-b7b81f9e9897 886 0 2025-05-17 00:50:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f88f6fd44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.7-n-e6f3637a46 whisker-5f88f6fd44-x7zmt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali55da5aaa5d2 [] [] }} ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Namespace="calico-system" Pod="whisker-5f88f6fd44-x7zmt" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.184 [INFO][3954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Namespace="calico-system" Pod="whisker-5f88f6fd44-x7zmt" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.209 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" HandleID="k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" May 17 00:50:52.309746 
env[1585]: 2025-05-17 00:50:52.209 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" HandleID="k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d70c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-e6f3637a46", "pod":"whisker-5f88f6fd44-x7zmt", "timestamp":"2025-05-17 00:50:52.209571779 +0000 UTC"}, Hostname:"ci-3510.3.7-n-e6f3637a46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.209 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.209 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.209 [INFO][3966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-e6f3637a46' May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.219 [INFO][3966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.225 [INFO][3966] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.230 [INFO][3966] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.234 [INFO][3966] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.236 [INFO][3966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.236 [INFO][3966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.237 [INFO][3966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0 May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.245 [INFO][3966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.252 [INFO][3966] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.1/26] block=192.168.75.0/26 
handle="k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.253 [INFO][3966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.1/26] handle="k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.253 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:52.309746 env[1585]: 2025-05-17 00:50:52.253 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.1/26] IPv6=[] ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" HandleID="k8s-pod-network.c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" May 17 00:50:52.310399 env[1585]: 2025-05-17 00:50:52.255 [INFO][3954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Namespace="calico-system" Pod="whisker-5f88f6fd44-x7zmt" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0", GenerateName:"whisker-5f88f6fd44-", Namespace:"calico-system", SelfLink:"", UID:"90b80adb-a2bc-44f9-b9c7-b7b81f9e9897", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f88f6fd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"", Pod:"whisker-5f88f6fd44-x7zmt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali55da5aaa5d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:52.310399 env[1585]: 2025-05-17 00:50:52.255 [INFO][3954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.1/32] ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Namespace="calico-system" Pod="whisker-5f88f6fd44-x7zmt" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" May 17 00:50:52.310399 env[1585]: 2025-05-17 00:50:52.255 [INFO][3954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55da5aaa5d2 ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Namespace="calico-system" Pod="whisker-5f88f6fd44-x7zmt" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" May 17 00:50:52.310399 env[1585]: 2025-05-17 00:50:52.292 [INFO][3954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Namespace="calico-system" Pod="whisker-5f88f6fd44-x7zmt" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" May 17 00:50:52.310399 env[1585]: 2025-05-17 00:50:52.292 [INFO][3954] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Namespace="calico-system" Pod="whisker-5f88f6fd44-x7zmt" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0", GenerateName:"whisker-5f88f6fd44-", Namespace:"calico-system", SelfLink:"", UID:"90b80adb-a2bc-44f9-b9c7-b7b81f9e9897", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f88f6fd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0", Pod:"whisker-5f88f6fd44-x7zmt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali55da5aaa5d2", MAC:"fe:e9:31:cc:35:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:52.310399 env[1585]: 2025-05-17 00:50:52.308 [INFO][3954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0" Namespace="calico-system" Pod="whisker-5f88f6fd44-x7zmt" 
WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--5f88f6fd44--x7zmt-eth0" May 17 00:50:52.321881 env[1585]: time="2025-05-17T00:50:52.321793124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:52.322061 env[1585]: time="2025-05-17T00:50:52.321852001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:52.322061 env[1585]: time="2025-05-17T00:50:52.321863520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:52.322135 env[1585]: time="2025-05-17T00:50:52.322092588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0 pid=3988 runtime=io.containerd.runc.v2 May 17 00:50:52.363956 env[1585]: time="2025-05-17T00:50:52.363839246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f88f6fd44-x7zmt,Uid:90b80adb-a2bc-44f9-b9c7-b7b81f9e9897,Namespace:calico-system,Attempt:0,} returns sandbox id \"c765dc184c4b0cf40079c1d99c911d6bb4ee6c917db3bba1c284f5411eac3ea0\"" May 17 00:50:52.367523 env[1585]: time="2025-05-17T00:50:52.366919042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:50:52.408000 audit[4071]: AVC avc: denied { write } for pid=4071 comm="tee" name="fd" dev="proc" ino=25963 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:50:52.413503 kernel: kauditd_printk_skb: 2 callbacks suppressed May 17 00:50:52.413631 kernel: audit: type=1400 audit(1747443052.408:311): avc: denied { write } for pid=4071 comm="tee" name="fd" dev="proc" ino=25963 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 
00:50:52.422000 audit[4074]: AVC avc: denied { write } for pid=4074 comm="tee" name="fd" dev="proc" ino=25067 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:50:52.453400 kernel: audit: type=1400 audit(1747443052.422:312): avc: denied { write } for pid=4074 comm="tee" name="fd" dev="proc" ino=25067 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:50:52.422000 audit[4074]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffff31d7d4 a2=241 a3=1b6 items=1 ppid=4033 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.504960 kernel: audit: type=1300 audit(1747443052.422:312): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffff31d7d4 a2=241 a3=1b6 items=1 ppid=4033 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.510683 kubelet[2655]: I0517 00:50:52.510647 2655 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6298c7d9-10fe-4278-8b14-e26174630105" path="/var/lib/kubelet/pods/6298c7d9-10fe-4278-8b14-e26174630105/volumes" May 17 00:50:52.422000 audit: CWD cwd="/etc/service/enabled/bird/log" May 17 00:50:52.521271 kernel: audit: type=1307 audit(1747443052.422:312): cwd="/etc/service/enabled/bird/log" May 17 00:50:52.521392 kernel: audit: type=1302 audit(1747443052.422:312): item=0 name="/dev/fd/63" inode=25054 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.422000 audit: PATH item=0 name="/dev/fd/63" inode=25054 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.554397 kernel: audit: type=1327 audit(1747443052.422:312): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.408000 audit[4071]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd92d67d3 a2=241 a3=1b6 items=1 ppid=4034 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.554926 env[1585]: time="2025-05-17T00:50:52.554704203Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:50:52.558413 env[1585]: time="2025-05-17T00:50:52.558358449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:50:52.558892 kubelet[2655]: E0517 00:50:52.558845 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:50:52.559030 kubelet[2655]: E0517 00:50:52.559014 2655 kuberuntime_image.go:55] 
"Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:50:52.559465 kubelet[2655]: E0517 00:50:52.559425 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c3d4b8a8cf3948f2b6deeb355e35bdc2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): 
ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:50:52.561849 env[1585]: time="2025-05-17T00:50:52.561777667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:50:52.580754 kernel: audit: type=1300 audit(1747443052.408:311): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd92d67d3 a2=241 a3=1b6 items=1 ppid=4034 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.408000 audit: CWD cwd="/etc/service/enabled/confd/log" May 17 00:50:52.609825 kernel: audit: type=1307 audit(1747443052.408:311): cwd="/etc/service/enabled/confd/log" May 17 00:50:52.408000 audit: PATH item=0 name="/dev/fd/63" inode=25053 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.635096 kernel: audit: type=1302 audit(1747443052.408:311): item=0 name="/dev/fd/63" inode=25053 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.408000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.657303 kernel: audit: type=1327 audit(1747443052.408:311): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.490000 audit[4097]: AVC avc: denied { write } for pid=4097 comm="tee" name="fd" dev="proc" 
ino=25087 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:50:52.490000 audit[4097]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffea74a7c4 a2=241 a3=1b6 items=1 ppid=4040 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.490000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 17 00:50:52.490000 audit: PATH item=0 name="/dev/fd/63" inode=25967 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.490000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.498000 audit[4088]: AVC avc: denied { write } for pid=4088 comm="tee" name="fd" dev="proc" ino=25091 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:50:52.498000 audit[4088]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff40647d3 a2=241 a3=1b6 items=1 ppid=4042 pid=4088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.498000 audit: CWD cwd="/etc/service/enabled/felix/log" May 17 00:50:52.498000 audit: PATH item=0 name="/dev/fd/63" inode=25078 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.498000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.499000 audit[4093]: AVC avc: denied { write } for pid=4093 comm="tee" name="fd" dev="proc" ino=25095 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:50:52.499000 audit[4093]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdae7f7c3 a2=241 a3=1b6 items=1 ppid=4037 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.499000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 17 00:50:52.499000 audit: PATH item=0 name="/dev/fd/63" inode=25968 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.499000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.511000 audit[4091]: AVC avc: denied { write } for pid=4091 comm="tee" name="fd" dev="proc" ino=25099 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:50:52.511000 audit[4091]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff37ec7d5 a2=241 a3=1b6 items=1 ppid=4046 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.511000 audit: CWD cwd="/etc/service/enabled/cni/log" May 17 00:50:52.511000 audit: PATH item=0 name="/dev/fd/63" inode=25079 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.511000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.513000 audit[4095]: AVC avc: denied { write } for pid=4095 comm="tee" name="fd" dev="proc" ino=25103 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 00:50:52.513000 audit[4095]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcfeeb7d3 a2=241 a3=1b6 items=1 ppid=4031 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.513000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 17 00:50:52.513000 audit: PATH item=0 name="/dev/fd/63" inode=25966 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:52.513000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 00:50:52.749180 env[1585]: time="2025-05-17T00:50:52.749122412Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:50:52.752999 env[1585]: time="2025-05-17T00:50:52.752936729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:50:52.753400 
kubelet[2655]: E0517 00:50:52.753357 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:50:52.753495 kubelet[2655]: E0517 00:50:52.753412 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:50:52.753601 kubelet[2655]: E0517 00:50:52.753536 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/
secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:50:52.754920 kubelet[2655]: E0517 00:50:52.754871 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" 
podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:50:52.835000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.835000 
audit: BPF prog-id=10 op=LOAD May 17 00:50:52.835000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8864e18 a2=98 a3=fffff8864e08 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.835000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.835000 audit: BPF prog-id=10 op=UNLOAD May 17 00:50:52.836000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit: BPF prog-id=11 op=LOAD May 17 00:50:52.836000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff8864aa8 a2=74 a3=95 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.836000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.836000 audit: BPF prog-id=11 op=UNLOAD May 17 00:50:52.836000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.836000 audit: BPF prog-id=12 op=LOAD May 17 00:50:52.836000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff8864b08 a2=94 a3=2 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.836000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.836000 audit: BPF prog-id=12 op=UNLOAD May 17 00:50:52.933000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.933000 audit: BPF prog-id=13 op=LOAD May 17 00:50:52.933000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff8864ac8 a2=40 a3=fffff8864af8 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.933000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.934000 audit: BPF prog-id=13 op=UNLOAD May 17 00:50:52.934000 audit[4136]: AVC avc: denied { 
perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.934000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=fffff8864be0 a2=50 a3=0 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.934000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.943000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.943000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff8864b38 a2=28 a3=fffff8864c68 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.943000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.943000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff8864b68 a2=28 a3=fffff8864c98 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.943000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.943000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff8864a18 a2=28 a3=fffff8864b48 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.943000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.943000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff8864b88 a2=28 a3=fffff8864cb8 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.943000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.943000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff8864b68 a2=28 a3=fffff8864c98 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.943000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.943000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff8864b58 a2=28 a3=fffff8864c88 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.943000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff8864b88 a2=28 a3=fffff8864cb8 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff8864b68 a2=28 a3=fffff8864c98 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff8864b88 a2=28 a3=fffff8864cb8 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff8864b58 a2=28 a3=fffff8864c88 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=fffff8864bd8 a2=28 a3=fffff8864d18 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff8864910 a2=50 a3=0 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit: BPF prog-id=14 op=LOAD May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff8864918 a2=94 a3=5 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit: BPF prog-id=14 op=UNLOAD May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=fffff8864a20 a2=50 a3=0 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=fffff8864b68 a2=4 a3=3 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { confidentiality } for pid=4136 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff8864b48 a2=94 a3=6 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { confidentiality } for pid=4136 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff8864318 a2=94 a3=83 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { perfmon } for pid=4136 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { bpf } for pid=4136 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.944000 audit[4136]: AVC avc: denied { confidentiality } for pid=4136 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:50:52.944000 audit[4136]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=fffff8864318 a2=94 a3=83 items=0 ppid=4043 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.944000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit: BPF prog-id=15 op=LOAD May 17 00:50:52.954000 audit[4139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffddf5c238 a2=98 a3=ffffddf5c228 items=0 ppid=4043 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.954000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:50:52.954000 audit: BPF prog-id=15 op=UNLOAD May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit: BPF prog-id=16 op=LOAD May 17 00:50:52.954000 audit[4139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffddf5c0e8 a2=74 a3=95 items=0 ppid=4043 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.954000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:50:52.954000 audit: BPF prog-id=16 op=UNLOAD May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { perfmon } for pid=4139 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit[4139]: AVC avc: denied { bpf } for pid=4139 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:52.954000 audit: BPF prog-id=17 op=LOAD May 17 00:50:52.954000 audit[4139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffddf5c118 a2=40 a3=ffffddf5c148 items=0 ppid=4043 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:52.954000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 00:50:52.954000 audit: BPF prog-id=17 op=UNLOAD May 17 00:50:53.067164 systemd-networkd[1757]: vxlan.calico: Link UP May 17 00:50:53.067170 systemd-networkd[1757]: vxlan.calico: Gained carrier May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit: BPF prog-id=18 op=LOAD May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeda1d828 a2=98 a3=ffffeda1d818 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit: BPF prog-id=18 op=UNLOAD May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit: BPF prog-id=19 op=LOAD May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeda1d508 a2=74 a3=95 items=0 ppid=4043 pid=4166 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit: BPF prog-id=19 op=UNLOAD May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit: BPF prog-id=20 op=LOAD May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeda1d568 a2=94 a3=2 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit: BPF prog-id=20 op=UNLOAD May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffeda1d598 a2=28 a3=ffffeda1d6c8 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for 
pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeda1d5c8 a2=28 a3=ffffeda1d6f8 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeda1d478 a2=28 a3=ffffeda1d5a8 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffeda1d5e8 a2=28 a3=ffffeda1d718 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffeda1d5c8 a2=28 a3=ffffeda1d6f8 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffeda1d5b8 a2=28 a3=ffffeda1d6e8 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffeda1d5e8 a2=28 a3=ffffeda1d718 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeda1d5c8 a2=28 a3=ffffeda1d6f8 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeda1d5e8 a2=28 a3=ffffeda1d718 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeda1d5b8 a2=28 a3=ffffeda1d6e8 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffeda1d638 a2=28 a3=ffffeda1d778 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 00:50:53.082000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.082000 audit: BPF prog-id=21 op=LOAD May 17 00:50:53.082000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeda1d458 a2=40 a3=ffffeda1d488 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.082000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.083000 audit: BPF prog-id=21 op=UNLOAD May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffeda1d480 a2=50 a3=0 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 
audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffeda1d480 a2=50 a3=0 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit: BPF prog-id=22 op=LOAD May 17 00:50:53.083000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeda1cbe8 a2=94 a3=2 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.083000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.083000 audit: BPF prog-id=22 op=UNLOAD May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { perfmon } for pid=4166 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit[4166]: AVC avc: denied { bpf } for pid=4166 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.083000 audit: BPF prog-id=23 op=LOAD May 17 00:50:53.083000 audit[4166]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeda1cd78 a2=94 a3=30 items=0 ppid=4043 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.083000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit[4168]: AVC avc: denied { bpf } for 
pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.086000 audit: BPF prog-id=24 op=LOAD May 17 00:50:53.086000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc06211b8 a2=98 a3=ffffc06211a8 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.086000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.086000 audit: BPF prog-id=24 op=UNLOAD May 17 00:50:53.087000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit: BPF prog-id=25 op=LOAD May 17 00:50:53.087000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0620e48 a2=74 a3=95 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.087000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.087000 audit: BPF prog-id=25 op=UNLOAD May 17 00:50:53.087000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.087000 audit: BPF prog-id=26 op=LOAD May 17 00:50:53.087000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0620ea8 a2=94 a3=2 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.087000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.087000 audit: BPF prog-id=26 op=UNLOAD May 17 00:50:53.187000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: AVC 
avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit: BPF prog-id=27 op=LOAD May 17 00:50:53.187000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0620e68 a2=40 a3=ffffc0620e98 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.187000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.187000 audit: BPF prog-id=27 op=UNLOAD May 17 00:50:53.187000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.187000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc0620f80 a2=50 a3=0 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.187000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc0620ed8 a2=28 a3=ffffc0621008 items=0 ppid=4043 pid=4168 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc0620f08 a2=28 a3=ffffc0621038 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc0620db8 a2=28 a3=ffffc0620ee8 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 
00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc0620f28 a2=28 a3=ffffc0621058 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc0620f08 a2=28 a3=ffffc0621038 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc0620ef8 a2=28 a3=ffffc0621028 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc0620f28 a2=28 a3=ffffc0621058 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc0620f08 a2=28 a3=ffffc0621038 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc0620f28 a2=28 a3=ffffc0621058 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc0620ef8 a2=28 a3=ffffc0621028 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc0620f78 a2=28 a3=ffffc06210b8 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc0620cb0 a2=50 a3=0 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { perfmon } for 
pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit: BPF prog-id=28 op=LOAD May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0620cb8 a2=94 a3=5 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit: BPF prog-id=28 op=UNLOAD May 17 00:50:53.196000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc0620dc0 
a2=50 a3=0 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.196000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.196000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc0620f08 a2=4 a3=3 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.196000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { confidentiality } for pid=4168 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:50:53.197000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc0620ee8 a2=94 a3=6 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
00:50:53.197000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon 
} for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { confidentiality } for pid=4168 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:50:53.197000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc06206b8 a2=94 a3=83 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.197000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { confidentiality } for pid=4168 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 00:50:53.197000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc06206b8 a2=94 a3=83 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.197000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.197000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.197000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc06220f8 a2=10 a3=ffffc06221e8 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.197000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.198000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.198000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc0621fb8 a2=10 a3=ffffc06220a8 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.198000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.198000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.198000 
audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc0621f28 a2=10 a3=ffffc06220a8 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.198000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.198000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 00:50:53.198000 audit[4168]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc0621f28 a2=10 a3=ffffc06220a8 items=0 ppid=4043 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.198000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 00:50:53.206000 audit: BPF prog-id=23 op=UNLOAD May 17 00:50:53.298000 audit[4194]: NETFILTER_CFG table=mangle:108 family=2 entries=16 op=nft_register_chain pid=4194 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:53.298000 audit[4194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc7e3f030 a2=0 a3=ffff9cb80fa8 items=0 ppid=4043 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.298000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:53.322000 audit[4197]: NETFILTER_CFG table=nat:109 family=2 entries=15 op=nft_register_chain pid=4197 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:53.322000 audit[4197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffff97562a0 a2=0 a3=ffffac1d6fa8 items=0 ppid=4043 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.322000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:53.346000 audit[4193]: NETFILTER_CFG table=raw:110 family=2 entries=21 op=nft_register_chain pid=4193 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:53.346000 audit[4193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc68e7350 a2=0 a3=ffffa28d3fa8 items=0 ppid=4043 pid=4193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.346000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:53.357000 audit[4196]: NETFILTER_CFG table=filter:111 family=2 entries=94 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:53.357000 audit[4196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffd7c334c0 a2=0 a3=ffff97d24fa8 items=0 ppid=4043 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.357000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:53.447690 systemd-networkd[1757]: cali55da5aaa5d2: Gained IPv6LL May 17 00:50:53.727590 kubelet[2655]: E0517 00:50:53.727465 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:50:53.758000 audit[4211]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:53.758000 audit[4211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffce09e830 a2=0 a3=1 items=0 ppid=2769 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:53.764000 audit[4211]: NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:53.764000 audit[4211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffce09e830 a2=0 a3=1 items=0 ppid=2769 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:53.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:54.215592 systemd-networkd[1757]: vxlan.calico: Gained IPv6LL May 17 00:50:56.481206 env[1585]: time="2025-05-17T00:50:56.481160685Z" level=info msg="StopPodSandbox for \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\"" May 17 00:50:56.484039 env[1585]: time="2025-05-17T00:50:56.483663243Z" level=info msg="StopPodSandbox for \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\"" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.562 [INFO][4232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.562 [INFO][4232] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" iface="eth0" netns="/var/run/netns/cni-be9bf12a-77d1-518e-3337-03ff37f76e53" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.563 [INFO][4232] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" iface="eth0" netns="/var/run/netns/cni-be9bf12a-77d1-518e-3337-03ff37f76e53" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.563 [INFO][4232] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" iface="eth0" netns="/var/run/netns/cni-be9bf12a-77d1-518e-3337-03ff37f76e53" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.563 [INFO][4232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.563 [INFO][4232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.585 [INFO][4247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.586 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.586 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.595 [WARNING][4247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.595 [INFO][4247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.596 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:56.599411 env[1585]: 2025-05-17 00:50:56.598 [INFO][4232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:50:56.603901 systemd[1]: run-netns-cni\x2dbe9bf12a\x2d77d1\x2d518e\x2d3337\x2d03ff37f76e53.mount: Deactivated successfully. 
May 17 00:50:56.607071 env[1585]: time="2025-05-17T00:50:56.605304366Z" level=info msg="TearDown network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\" successfully" May 17 00:50:56.607071 env[1585]: time="2025-05-17T00:50:56.605343684Z" level=info msg="StopPodSandbox for \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\" returns successfully" May 17 00:50:56.607463 env[1585]: time="2025-05-17T00:50:56.607434623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f68fbb46-gf7hk,Uid:a468b308-fb40-4092-92f2-ce4832332fb4,Namespace:calico-apiserver,Attempt:1,}" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.557 [INFO][4233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.558 [INFO][4233] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" iface="eth0" netns="/var/run/netns/cni-de6f38ea-1b9a-f65b-35aa-4b829d1263bd" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.558 [INFO][4233] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" iface="eth0" netns="/var/run/netns/cni-de6f38ea-1b9a-f65b-35aa-4b829d1263bd" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.560 [INFO][4233] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" iface="eth0" netns="/var/run/netns/cni-de6f38ea-1b9a-f65b-35aa-4b829d1263bd" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.560 [INFO][4233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.560 [INFO][4233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.599 [INFO][4251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.599 [INFO][4251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.599 [INFO][4251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.610 [WARNING][4251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.611 [INFO][4251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.613 [INFO][4251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:56.616273 env[1585]: 2025-05-17 00:50:56.614 [INFO][4233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:50:56.619609 systemd[1]: run-netns-cni\x2dde6f38ea\x2d1b9a\x2df65b\x2d35aa\x2d4b829d1263bd.mount: Deactivated successfully. 
May 17 00:50:56.620776 env[1585]: time="2025-05-17T00:50:56.620743135Z" level=info msg="TearDown network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\" successfully" May 17 00:50:56.620870 env[1585]: time="2025-05-17T00:50:56.620855410Z" level=info msg="StopPodSandbox for \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\" returns successfully" May 17 00:50:56.621780 env[1585]: time="2025-05-17T00:50:56.621749326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kkp5z,Uid:8dec4dde-a041-4dab-a0cc-b7a73829dbcb,Namespace:kube-system,Attempt:1,}" May 17 00:50:56.815579 systemd-networkd[1757]: cali7af66c21218: Link UP May 17 00:50:56.830721 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:50:56.830848 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7af66c21218: link becomes ready May 17 00:50:56.831464 systemd-networkd[1757]: cali7af66c21218: Gained carrier May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.731 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0 coredns-7c65d6cfc9- kube-system 8dec4dde-a041-4dab-a0cc-b7a73829dbcb 916 0 2025-05-17 00:50:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.7-n-e6f3637a46 coredns-7c65d6cfc9-kkp5z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7af66c21218 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kkp5z" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.731 [INFO][4269] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kkp5z" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.760 [INFO][4286] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" HandleID="k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.760 [INFO][4286] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" HandleID="k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022f690), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.7-n-e6f3637a46", "pod":"coredns-7c65d6cfc9-kkp5z", "timestamp":"2025-05-17 00:50:56.760101997 +0000 UTC"}, Hostname:"ci-3510.3.7-n-e6f3637a46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.760 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.760 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.760 [INFO][4286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-e6f3637a46' May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.772 [INFO][4286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.776 [INFO][4286] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.781 [INFO][4286] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.783 [INFO][4286] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.787 [INFO][4286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.787 [INFO][4286] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.789 [INFO][4286] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.794 [INFO][4286] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.803 [INFO][4286] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.2/26] block=192.168.75.0/26 
handle="k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.804 [INFO][4286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.2/26] handle="k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.804 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:56.854242 env[1585]: 2025-05-17 00:50:56.804 [INFO][4286] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.2/26] IPv6=[] ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" HandleID="k8s-pod-network.b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.854918 env[1585]: 2025-05-17 00:50:56.808 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kkp5z" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8dec4dde-a041-4dab-a0cc-b7a73829dbcb", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"", Pod:"coredns-7c65d6cfc9-kkp5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7af66c21218", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:56.854918 env[1585]: 2025-05-17 00:50:56.808 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.2/32] ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kkp5z" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.854918 env[1585]: 2025-05-17 00:50:56.808 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7af66c21218 ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kkp5z" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.854918 env[1585]: 2025-05-17 00:50:56.832 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-kkp5z" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.854918 env[1585]: 2025-05-17 00:50:56.832 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kkp5z" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8dec4dde-a041-4dab-a0cc-b7a73829dbcb", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b", Pod:"coredns-7c65d6cfc9-kkp5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7af66c21218", MAC:"56:28:af:41:60:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:56.854918 env[1585]: 2025-05-17 00:50:56.851 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kkp5z" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:50:56.859000 audit[4309]: NETFILTER_CFG table=filter:114 family=2 entries=42 op=nft_register_chain pid=4309 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:56.859000 audit[4309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffd1dc73b0 a2=0 a3=ffffb5165fa8 items=0 ppid=4043 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:56.859000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:56.870259 env[1585]: time="2025-05-17T00:50:56.870058128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:56.870259 env[1585]: time="2025-05-17T00:50:56.870095687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:56.870259 env[1585]: time="2025-05-17T00:50:56.870106006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:56.871318 env[1585]: time="2025-05-17T00:50:56.870564024Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b pid=4318 runtime=io.containerd.runc.v2 May 17 00:50:56.956671 systemd-networkd[1757]: cali182b58ffca2: Link UP May 17 00:50:56.972527 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali182b58ffca2: link becomes ready May 17 00:50:56.970647 systemd-networkd[1757]: cali182b58ffca2: Gained carrier May 17 00:50:56.974606 env[1585]: time="2025-05-17T00:50:56.974556166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kkp5z,Uid:8dec4dde-a041-4dab-a0cc-b7a73829dbcb,Namespace:kube-system,Attempt:1,} returns sandbox id \"b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b\"" May 17 00:50:56.978756 env[1585]: time="2025-05-17T00:50:56.978708524Z" level=info msg="CreateContainer within sandbox \"b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.728 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0 calico-apiserver-76f68fbb46- calico-apiserver a468b308-fb40-4092-92f2-ce4832332fb4 917 0 2025-05-17 00:50:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76f68fbb46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.7-n-e6f3637a46 calico-apiserver-76f68fbb46-gf7hk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali182b58ffca2 [] [] }} 
ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-gf7hk" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.728 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-gf7hk" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.769 [INFO][4284] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" HandleID="k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.770 [INFO][4284] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" HandleID="k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003214d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.7-n-e6f3637a46", "pod":"calico-apiserver-76f68fbb46-gf7hk", "timestamp":"2025-05-17 00:50:56.769949958 +0000 UTC"}, Hostname:"ci-3510.3.7-n-e6f3637a46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.770 [INFO][4284] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.804 [INFO][4284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.804 [INFO][4284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-e6f3637a46' May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.881 [INFO][4284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.889 [INFO][4284] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.895 [INFO][4284] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.900 [INFO][4284] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.903 [INFO][4284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.903 [INFO][4284] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.905 [INFO][4284] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.911 [INFO][4284] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 
env[1585]: 2025-05-17 00:50:56.935 [INFO][4284] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.3/26] block=192.168.75.0/26 handle="k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.935 [INFO][4284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.3/26] handle="k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.935 [INFO][4284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:56.996071 env[1585]: 2025-05-17 00:50:56.935 [INFO][4284] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.3/26] IPv6=[] ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" HandleID="k8s-pod-network.f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.996693 env[1585]: 2025-05-17 00:50:56.942 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-gf7hk" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0", GenerateName:"calico-apiserver-76f68fbb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"a468b308-fb40-4092-92f2-ce4832332fb4", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f68fbb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"", Pod:"calico-apiserver-76f68fbb46-gf7hk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali182b58ffca2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:56.996693 env[1585]: 2025-05-17 00:50:56.943 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.3/32] ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-gf7hk" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.996693 env[1585]: 2025-05-17 00:50:56.943 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali182b58ffca2 ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-gf7hk" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.996693 env[1585]: 2025-05-17 00:50:56.971 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Namespace="calico-apiserver" 
Pod="calico-apiserver-76f68fbb46-gf7hk" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:56.996693 env[1585]: 2025-05-17 00:50:56.974 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-gf7hk" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0", GenerateName:"calico-apiserver-76f68fbb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"a468b308-fb40-4092-92f2-ce4832332fb4", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f68fbb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b", Pod:"calico-apiserver-76f68fbb46-gf7hk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali182b58ffca2", 
MAC:"ae:ef:fd:a3:28:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:56.996693 env[1585]: 2025-05-17 00:50:56.989 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-gf7hk" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:50:57.008000 audit[4364]: NETFILTER_CFG table=filter:115 family=2 entries=54 op=nft_register_chain pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:57.008000 audit[4364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffce598560 a2=0 a3=ffff82b36fa8 items=0 ppid=4043 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:57.008000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:57.034467 env[1585]: time="2025-05-17T00:50:57.034378492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:57.034467 env[1585]: time="2025-05-17T00:50:57.034496926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:57.034467 env[1585]: time="2025-05-17T00:50:57.034525325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:57.034876 env[1585]: time="2025-05-17T00:50:57.034800432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b pid=4372 runtime=io.containerd.runc.v2 May 17 00:50:57.035071 env[1585]: time="2025-05-17T00:50:57.035030941Z" level=info msg="CreateContainer within sandbox \"b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"912a55f65ae88eecec4147d20252c322d30d2125651cfe5d15af70bbd744a1eb\"" May 17 00:50:57.037194 env[1585]: time="2025-05-17T00:50:57.037159199Z" level=info msg="StartContainer for \"912a55f65ae88eecec4147d20252c322d30d2125651cfe5d15af70bbd744a1eb\"" May 17 00:50:57.107532 env[1585]: time="2025-05-17T00:50:57.106966838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f68fbb46-gf7hk,Uid:a468b308-fb40-4092-92f2-ce4832332fb4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b\"" May 17 00:50:57.110728 env[1585]: time="2025-05-17T00:50:57.109900739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:50:57.115216 env[1585]: time="2025-05-17T00:50:57.115176008Z" level=info msg="StartContainer for \"912a55f65ae88eecec4147d20252c322d30d2125651cfe5d15af70bbd744a1eb\" returns successfully" May 17 00:50:57.482352 env[1585]: time="2025-05-17T00:50:57.482222064Z" level=info msg="StopPodSandbox for \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\"" May 17 00:50:57.483183 env[1585]: time="2025-05-17T00:50:57.483139060Z" level=info msg="StopPodSandbox for \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\"" May 17 00:50:57.483604 env[1585]: time="2025-05-17T00:50:57.483582759Z" level=info msg="StopPodSandbox for 
\"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\"" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.568 [INFO][4469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.568 [INFO][4469] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" iface="eth0" netns="/var/run/netns/cni-3e79af5d-b510-b93f-2201-f07cc8463410" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.569 [INFO][4469] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" iface="eth0" netns="/var/run/netns/cni-3e79af5d-b510-b93f-2201-f07cc8463410" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.570 [INFO][4469] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" iface="eth0" netns="/var/run/netns/cni-3e79af5d-b510-b93f-2201-f07cc8463410" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.570 [INFO][4469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.570 [INFO][4469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.623 [INFO][4487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.623 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.623 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.633 [WARNING][4487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.633 [INFO][4487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.635 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:57.645102 env[1585]: 2025-05-17 00:50:57.643 [INFO][4469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:50:57.648196 systemd[1]: run-netns-cni\x2d3e79af5d\x2db510\x2db93f\x2d2201\x2df07cc8463410.mount: Deactivated successfully. 
May 17 00:50:57.648834 env[1585]: time="2025-05-17T00:50:57.648792219Z" level=info msg="TearDown network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\" successfully" May 17 00:50:57.648929 env[1585]: time="2025-05-17T00:50:57.648913333Z" level=info msg="StopPodSandbox for \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\" returns successfully" May 17 00:50:57.650413 env[1585]: time="2025-05-17T00:50:57.650379143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-7vz96,Uid:91ca7308-a9d6-4eb9-b8c4-045158a71d72,Namespace:calico-system,Attempt:1,}" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.591 [INFO][4461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.591 [INFO][4461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" iface="eth0" netns="/var/run/netns/cni-d36698ee-8fdb-4be7-75ac-553d50d57a41" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.591 [INFO][4461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" iface="eth0" netns="/var/run/netns/cni-d36698ee-8fdb-4be7-75ac-553d50d57a41" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.592 [INFO][4461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" iface="eth0" netns="/var/run/netns/cni-d36698ee-8fdb-4be7-75ac-553d50d57a41" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.592 [INFO][4461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.592 [INFO][4461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.628 [INFO][4496] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.628 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.635 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.660 [WARNING][4496] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.660 [INFO][4496] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.662 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:57.671626 env[1585]: 2025-05-17 00:50:57.669 [INFO][4461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:50:57.674248 systemd[1]: run-netns-cni\x2dd36698ee\x2d8fdb\x2d4be7\x2d75ac\x2d553d50d57a41.mount: Deactivated successfully. 
May 17 00:50:57.678472 env[1585]: time="2025-05-17T00:50:57.676012604Z" level=info msg="TearDown network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\" successfully" May 17 00:50:57.678472 env[1585]: time="2025-05-17T00:50:57.676056922Z" level=info msg="StopPodSandbox for \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\" returns successfully" May 17 00:50:57.679278 env[1585]: time="2025-05-17T00:50:57.679235090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8ksxs,Uid:1d05e6a1-f683-492a-9425-3efff5a40b11,Namespace:calico-system,Attempt:1,}" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.586 [INFO][4470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.586 [INFO][4470] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" iface="eth0" netns="/var/run/netns/cni-a42a83d0-c343-bb19-ca4e-801b140be526" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.587 [INFO][4470] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" iface="eth0" netns="/var/run/netns/cni-a42a83d0-c343-bb19-ca4e-801b140be526" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.587 [INFO][4470] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" iface="eth0" netns="/var/run/netns/cni-a42a83d0-c343-bb19-ca4e-801b140be526" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.587 [INFO][4470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.587 [INFO][4470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.644 [INFO][4494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.644 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.662 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.673 [WARNING][4494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.674 [INFO][4494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.675 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:57.681066 env[1585]: 2025-05-17 00:50:57.678 [INFO][4470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:50:57.684161 systemd[1]: run-netns-cni\x2da42a83d0\x2dc343\x2dbb19\x2dca4e\x2d801b140be526.mount: Deactivated successfully. 
May 17 00:50:57.686139 env[1585]: time="2025-05-17T00:50:57.686095884Z" level=info msg="TearDown network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\" successfully" May 17 00:50:57.686309 env[1585]: time="2025-05-17T00:50:57.686288955Z" level=info msg="StopPodSandbox for \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\" returns successfully" May 17 00:50:57.687224 env[1585]: time="2025-05-17T00:50:57.687189152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9b8l5,Uid:7f1c698e-516f-4fe7-9f6f-d4affc1db4e0,Namespace:kube-system,Attempt:1,}" May 17 00:50:57.756510 kubelet[2655]: I0517 00:50:57.756411 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kkp5z" podStartSLOduration=41.756393499 podStartE2EDuration="41.756393499s" podCreationTimestamp="2025-05-17 00:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:57.755841366 +0000 UTC m=+47.487030109" watchObservedRunningTime="2025-05-17 00:50:57.756393499 +0000 UTC m=+47.487582242" May 17 00:50:57.770000 audit[4509]: NETFILTER_CFG table=filter:116 family=2 entries=20 op=nft_register_rule pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:57.779142 kernel: kauditd_printk_skb: 517 callbacks suppressed May 17 00:50:57.779260 kernel: audit: type=1325 audit(1747443057.770:417): table=filter:116 family=2 entries=20 op=nft_register_rule pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:57.770000 audit[4509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffea02c030 a2=0 a3=1 items=0 ppid=2769 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 
17 00:50:57.826512 kernel: audit: type=1300 audit(1747443057.770:417): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffea02c030 a2=0 a3=1 items=0 ppid=2769 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:57.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:57.842190 kernel: audit: type=1327 audit(1747443057.770:417): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:57.800000 audit[4509]: NETFILTER_CFG table=nat:117 family=2 entries=14 op=nft_register_rule pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:57.856872 kernel: audit: type=1325 audit(1747443057.800:418): table=nat:117 family=2 entries=14 op=nft_register_rule pid=4509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:57.800000 audit[4509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffea02c030 a2=0 a3=1 items=0 ppid=2769 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:57.884159 kernel: audit: type=1300 audit(1747443057.800:418): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffea02c030 a2=0 a3=1 items=0 ppid=2769 pid=4509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:57.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 
00:50:57.900005 kernel: audit: type=1327 audit(1747443057.800:418): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:57.884000 audit[4526]: NETFILTER_CFG table=filter:118 family=2 entries=17 op=nft_register_rule pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:57.915752 kernel: audit: type=1325 audit(1747443057.884:419): table=filter:118 family=2 entries=17 op=nft_register_rule pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:57.943445 kernel: audit: type=1300 audit(1747443057.884:419): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdd0794a0 a2=0 a3=1 items=0 ppid=2769 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:57.884000 audit[4526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdd0794a0 a2=0 a3=1 items=0 ppid=2769 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:57.884000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:57.960221 kernel: audit: type=1327 audit(1747443057.884:419): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:57.901000 audit[4526]: NETFILTER_CFG table=nat:119 family=2 entries=35 op=nft_register_chain pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:57.979842 kernel: audit: type=1325 audit(1747443057.901:420): table=nat:119 family=2 entries=35 op=nft_register_chain pid=4526 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" May 17 00:50:57.901000 audit[4526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffdd0794a0 a2=0 a3=1 items=0 ppid=2769 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:57.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:58.108702 systemd-networkd[1757]: cali253efcd4292: Link UP May 17 00:50:58.122147 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:50:58.122279 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali253efcd4292: link becomes ready May 17 00:50:58.124326 systemd-networkd[1757]: cali253efcd4292: Gained carrier May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:57.992 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0 coredns-7c65d6cfc9- kube-system 7f1c698e-516f-4fe7-9f6f-d4affc1db4e0 933 0 2025-05-17 00:50:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.7-n-e6f3637a46 coredns-7c65d6cfc9-9b8l5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali253efcd4292 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9b8l5" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:57.992 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9b8l5" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.060 [INFO][4551] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" HandleID="k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.060 [INFO][4551] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" HandleID="k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003214d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.7-n-e6f3637a46", "pod":"coredns-7c65d6cfc9-9b8l5", "timestamp":"2025-05-17 00:50:58.060266143 +0000 UTC"}, Hostname:"ci-3510.3.7-n-e6f3637a46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.060 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.060 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.060 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-e6f3637a46' May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.069 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.074 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.077 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.079 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.081 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.081 [INFO][4551] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.082 [INFO][4551] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5 May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.090 [INFO][4551] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.096 [INFO][4551] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.4/26] block=192.168.75.0/26 
handle="k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.096 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.4/26] handle="k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.097 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:58.141645 env[1585]: 2025-05-17 00:50:58.097 [INFO][4551] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.4/26] IPv6=[] ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" HandleID="k8s-pod-network.ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:58.142253 env[1585]: 2025-05-17 00:50:58.102 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9b8l5" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7f1c698e-516f-4fe7-9f6f-d4affc1db4e0", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"", Pod:"coredns-7c65d6cfc9-9b8l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali253efcd4292", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:58.142253 env[1585]: 2025-05-17 00:50:58.102 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.4/32] ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9b8l5" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:58.142253 env[1585]: 2025-05-17 00:50:58.102 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali253efcd4292 ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9b8l5" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:58.142253 env[1585]: 2025-05-17 00:50:58.124 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-9b8l5" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:58.142253 env[1585]: 2025-05-17 00:50:58.124 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9b8l5" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7f1c698e-516f-4fe7-9f6f-d4affc1db4e0", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5", Pod:"coredns-7c65d6cfc9-9b8l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali253efcd4292", MAC:"96:02:70:e8:ab:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:58.142253 env[1585]: 2025-05-17 00:50:58.140 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9b8l5" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:50:58.155000 audit[4581]: NETFILTER_CFG table=filter:120 family=2 entries=40 op=nft_register_chain pid=4581 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:58.155000 audit[4581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20344 a0=3 a1=fffffefe1f90 a2=0 a3=ffff96edafa8 items=0 ppid=4043 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:58.155000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:58.222627 env[1585]: time="2025-05-17T00:50:58.220655997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:58.222880 systemd-networkd[1757]: cali083bb67f754: Link UP May 17 00:50:58.223588 env[1585]: time="2025-05-17T00:50:58.223476186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:58.223669 env[1585]: time="2025-05-17T00:50:58.223600100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:58.223937 env[1585]: time="2025-05-17T00:50:58.223896086Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5 pid=4591 runtime=io.containerd.runc.v2 May 17 00:50:58.232232 systemd-networkd[1757]: cali083bb67f754: Gained carrier May 17 00:50:58.232664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali083bb67f754: link becomes ready May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:57.990 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0 goldmane-8f77d7b6c- calico-system 91ca7308-a9d6-4eb9-b8c4-045158a71d72 932 0 2025-05-17 00:50:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.7-n-e6f3637a46 goldmane-8f77d7b6c-7vz96 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali083bb67f754 [] [] }} ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7vz96" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:57.990 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7vz96" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" 
May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.060 [INFO][4555] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" HandleID="k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.061 [INFO][4555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" HandleID="k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-e6f3637a46", "pod":"goldmane-8f77d7b6c-7vz96", "timestamp":"2025-05-17 00:50:58.059801525 +0000 UTC"}, Hostname:"ci-3510.3.7-n-e6f3637a46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.062 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.097 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.097 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-e6f3637a46' May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.170 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.176 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.180 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.182 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.184 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.184 [INFO][4555] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.186 [INFO][4555] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0 May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.191 [INFO][4555] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.201 [INFO][4555] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.5/26] block=192.168.75.0/26 
handle="k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.202 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.5/26] handle="k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.202 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:58.266203 env[1585]: 2025-05-17 00:50:58.202 [INFO][4555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.5/26] IPv6=[] ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" HandleID="k8s-pod-network.0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:58.266872 env[1585]: 2025-05-17 00:50:58.211 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7vz96" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"91ca7308-a9d6-4eb9-b8c4-045158a71d72", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"", Pod:"goldmane-8f77d7b6c-7vz96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali083bb67f754", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:58.266872 env[1585]: 2025-05-17 00:50:58.211 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.5/32] ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7vz96" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:58.266872 env[1585]: 2025-05-17 00:50:58.211 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali083bb67f754 ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7vz96" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:58.266872 env[1585]: 2025-05-17 00:50:58.233 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7vz96" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:58.266872 env[1585]: 2025-05-17 00:50:58.233 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7vz96" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"91ca7308-a9d6-4eb9-b8c4-045158a71d72", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0", Pod:"goldmane-8f77d7b6c-7vz96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali083bb67f754", MAC:"62:e5:2c:b1:56:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:58.266872 env[1585]: 2025-05-17 00:50:58.255 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0" Namespace="calico-system" Pod="goldmane-8f77d7b6c-7vz96" 
WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:50:58.291000 audit[4629]: NETFILTER_CFG table=filter:121 family=2 entries=56 op=nft_register_chain pid=4629 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:58.291000 audit[4629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28744 a0=3 a1=ffffce707970 a2=0 a3=ffff97662fa8 items=0 ppid=4043 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:58.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:58.319956 env[1585]: time="2025-05-17T00:50:58.319918536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9b8l5,Uid:7f1c698e-516f-4fe7-9f6f-d4affc1db4e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5\"" May 17 00:50:58.335387 env[1585]: time="2025-05-17T00:50:58.335336259Z" level=info msg="CreateContainer within sandbox \"ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:50:58.336000 env[1585]: time="2025-05-17T00:50:58.335925791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:58.336141 env[1585]: time="2025-05-17T00:50:58.336103943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:58.336242 env[1585]: time="2025-05-17T00:50:58.336221297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:58.339948 env[1585]: time="2025-05-17T00:50:58.338718781Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0 pid=4645 runtime=io.containerd.runc.v2 May 17 00:50:58.365636 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6b824d24d4d: link becomes ready May 17 00:50:58.373124 systemd-networkd[1757]: cali6b824d24d4d: Link UP May 17 00:50:58.373388 systemd-networkd[1757]: cali6b824d24d4d: Gained carrier May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.029 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0 csi-node-driver- calico-system 1d05e6a1-f683-492a-9425-3efff5a40b11 934 0 2025-05-17 00:50:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.7-n-e6f3637a46 csi-node-driver-8ksxs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6b824d24d4d [] [] }} ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Namespace="calico-system" Pod="csi-node-driver-8ksxs" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.029 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Namespace="calico-system" Pod="csi-node-driver-8ksxs" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.065 
[INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" HandleID="k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.065 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" HandleID="k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a7020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-e6f3637a46", "pod":"csi-node-driver-8ksxs", "timestamp":"2025-05-17 00:50:58.065836084 +0000 UTC"}, Hostname:"ci-3510.3.7-n-e6f3637a46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.066 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.202 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.202 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-e6f3637a46' May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.278 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.295 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.309 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.311 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.313 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.313 [INFO][4563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.316 [INFO][4563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.328 [INFO][4563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.344 [INFO][4563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.6/26] block=192.168.75.0/26 
handle="k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.344 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.6/26] handle="k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.344 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:58.384251 env[1585]: 2025-05-17 00:50:58.344 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.6/26] IPv6=[] ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" HandleID="k8s-pod-network.309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:58.384889 env[1585]: 2025-05-17 00:50:58.348 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Namespace="calico-system" Pod="csi-node-driver-8ksxs" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d05e6a1-f683-492a-9425-3efff5a40b11", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"", Pod:"csi-node-driver-8ksxs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b824d24d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:58.384889 env[1585]: 2025-05-17 00:50:58.348 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.6/32] ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Namespace="calico-system" Pod="csi-node-driver-8ksxs" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:58.384889 env[1585]: 2025-05-17 00:50:58.348 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b824d24d4d ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Namespace="calico-system" Pod="csi-node-driver-8ksxs" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:58.384889 env[1585]: 2025-05-17 00:50:58.367 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Namespace="calico-system" Pod="csi-node-driver-8ksxs" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:58.384889 env[1585]: 2025-05-17 00:50:58.367 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Namespace="calico-system" Pod="csi-node-driver-8ksxs" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d05e6a1-f683-492a-9425-3efff5a40b11", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c", Pod:"csi-node-driver-8ksxs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b824d24d4d", MAC:"c6:99:cd:b7:ea:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:58.384889 env[1585]: 2025-05-17 00:50:58.380 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c" Namespace="calico-system" Pod="csi-node-driver-8ksxs" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:50:58.400598 env[1585]: time="2025-05-17T00:50:58.400555583Z" level=info msg="CreateContainer within sandbox \"ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db8095bb5caf92809b54a79f278d5eb872fd80283d74c1a38c4890792fe8e95a\"" May 17 00:50:58.403950 env[1585]: time="2025-05-17T00:50:58.403888188Z" level=info msg="StartContainer for \"db8095bb5caf92809b54a79f278d5eb872fd80283d74c1a38c4890792fe8e95a\"" May 17 00:50:58.405000 audit[4686]: NETFILTER_CFG table=filter:122 family=2 entries=52 op=nft_register_chain pid=4686 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:58.405000 audit[4686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24328 a0=3 a1=fffffa550ce0 a2=0 a3=ffff88cb9fa8 items=0 ppid=4043 pid=4686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:58.405000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:58.456025 env[1585]: time="2025-05-17T00:50:58.455985442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-7vz96,Uid:91ca7308-a9d6-4eb9-b8c4-045158a71d72,Namespace:calico-system,Attempt:1,} returns sandbox id \"0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0\"" May 17 00:50:58.457800 env[1585]: time="2025-05-17T00:50:58.457595368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:58.457800 env[1585]: time="2025-05-17T00:50:58.457630846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:58.457800 env[1585]: time="2025-05-17T00:50:58.457641005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:58.458028 env[1585]: time="2025-05-17T00:50:58.457848476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c pid=4705 runtime=io.containerd.runc.v2 May 17 00:50:58.494270 env[1585]: time="2025-05-17T00:50:58.490792742Z" level=info msg="StopPodSandbox for \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\"" May 17 00:50:58.496272 env[1585]: time="2025-05-17T00:50:58.496226769Z" level=info msg="StopPodSandbox for \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\"" May 17 00:50:58.503820 systemd-networkd[1757]: cali7af66c21218: Gained IPv6LL May 17 00:50:58.607851 env[1585]: time="2025-05-17T00:50:58.607802575Z" level=info msg="StartContainer for \"db8095bb5caf92809b54a79f278d5eb872fd80283d74c1a38c4890792fe8e95a\" returns successfully" May 17 00:50:58.641460 env[1585]: time="2025-05-17T00:50:58.639705450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8ksxs,Uid:1d05e6a1-f683-492a-9425-3efff5a40b11,Namespace:calico-system,Attempt:1,} returns sandbox id \"309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c\"" May 17 00:50:58.807536 kubelet[2655]: I0517 00:50:58.807123 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9b8l5" podStartSLOduration=42.807074779 podStartE2EDuration="42.807074779s" podCreationTimestamp="2025-05-17 00:50:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:58.786760645 +0000 UTC m=+48.517949388" watchObservedRunningTime="2025-05-17 00:50:58.807074779 +0000 UTC m=+48.538263522" May 17 00:50:58.824000 audit[4818]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=4818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:58.824000 audit[4818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffb4cfff0 a2=0 a3=1 items=0 ppid=2769 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:58.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.697 [INFO][4791] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.733 [INFO][4791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" iface="eth0" netns="/var/run/netns/cni-34d81acf-2437-bd93-48e3-53c6aca50103" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.733 [INFO][4791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" iface="eth0" netns="/var/run/netns/cni-34d81acf-2437-bd93-48e3-53c6aca50103" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.734 [INFO][4791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" iface="eth0" netns="/var/run/netns/cni-34d81acf-2437-bd93-48e3-53c6aca50103" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.734 [INFO][4791] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.734 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.798 [INFO][4807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.799 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.799 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.824 [WARNING][4807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.825 [INFO][4807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.826 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:58.834320 env[1585]: 2025-05-17 00:50:58.828 [INFO][4791] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:50:58.835000 audit[4818]: NETFILTER_CFG table=nat:124 family=2 entries=44 op=nft_register_rule pid=4818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:58.835000 audit[4818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffffb4cfff0 a2=0 a3=1 items=0 ppid=2769 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:58.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:58.837509 systemd[1]: run-netns-cni\x2d34d81acf\x2d2437\x2dbd93\x2d48e3\x2d53c6aca50103.mount: Deactivated successfully. 
May 17 00:50:58.839548 env[1585]: time="2025-05-17T00:50:58.839502910Z" level=info msg="TearDown network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\" successfully" May 17 00:50:58.839640 env[1585]: time="2025-05-17T00:50:58.839623744Z" level=info msg="StopPodSandbox for \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\" returns successfully" May 17 00:50:58.843869 env[1585]: time="2025-05-17T00:50:58.843837028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f68fbb46-wvb9m,Uid:627944b6-ba25-4900-b922-e4dd80aebc0f,Namespace:calico-apiserver,Attempt:1,}" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.680 [INFO][4787] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.732 [INFO][4787] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" iface="eth0" netns="/var/run/netns/cni-08cf97e8-2362-e234-35e8-fa56052d4eea" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.732 [INFO][4787] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" iface="eth0" netns="/var/run/netns/cni-08cf97e8-2362-e234-35e8-fa56052d4eea" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.732 [INFO][4787] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" iface="eth0" netns="/var/run/netns/cni-08cf97e8-2362-e234-35e8-fa56052d4eea" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.732 [INFO][4787] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.732 [INFO][4787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.802 [INFO][4805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.802 [INFO][4805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.827 [INFO][4805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.840 [WARNING][4805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.840 [INFO][4805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.843 [INFO][4805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:58.849378 env[1585]: 2025-05-17 00:50:58.848 [INFO][4787] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:50:58.853019 env[1585]: time="2025-05-17T00:50:58.851995768Z" level=info msg="TearDown network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\" successfully" May 17 00:50:58.853019 env[1585]: time="2025-05-17T00:50:58.852028127Z" level=info msg="StopPodSandbox for \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\" returns successfully" May 17 00:50:58.852005 systemd[1]: run-netns-cni\x2d08cf97e8\x2d2362\x2de234\x2d35e8\x2dfa56052d4eea.mount: Deactivated successfully. 
May 17 00:50:58.863749 env[1585]: time="2025-05-17T00:50:58.863711263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545d5864b5-tt9qd,Uid:bc9691d8-0209-4d9d-befa-8a2f81686ebd,Namespace:calico-system,Attempt:1,}" May 17 00:50:58.887978 systemd-networkd[1757]: cali182b58ffca2: Gained IPv6LL May 17 00:50:59.071382 systemd-networkd[1757]: cali32329d7cf7a: Link UP May 17 00:50:59.078632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali32329d7cf7a: link becomes ready May 17 00:50:59.075787 systemd-networkd[1757]: cali32329d7cf7a: Gained carrier May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:58.959 [INFO][4820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0 calico-apiserver-76f68fbb46- calico-apiserver 627944b6-ba25-4900-b922-e4dd80aebc0f 963 0 2025-05-17 00:50:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76f68fbb46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.7-n-e6f3637a46 calico-apiserver-76f68fbb46-wvb9m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali32329d7cf7a [] [] }} ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-wvb9m" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:58.959 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-wvb9m" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 
00:50:59.127924 env[1585]: 2025-05-17 00:50:59.018 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" HandleID="k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.018 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" HandleID="k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.7-n-e6f3637a46", "pod":"calico-apiserver-76f68fbb46-wvb9m", "timestamp":"2025-05-17 00:50:59.016412331 +0000 UTC"}, Hostname:"ci-3510.3.7-n-e6f3637a46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.018 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.018 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.018 [INFO][4844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-e6f3637a46' May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.028 [INFO][4844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.033 [INFO][4844] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.039 [INFO][4844] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.041 [INFO][4844] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.043 [INFO][4844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.043 [INFO][4844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.046 [INFO][4844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.051 [INFO][4844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.063 [INFO][4844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.7/26] block=192.168.75.0/26 
handle="k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.064 [INFO][4844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.7/26] handle="k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.064 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:59.127924 env[1585]: 2025-05-17 00:50:59.064 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.7/26] IPv6=[] ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" HandleID="k8s-pod-network.bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:50:59.128583 env[1585]: 2025-05-17 00:50:59.065 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-wvb9m" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0", GenerateName:"calico-apiserver-76f68fbb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"627944b6-ba25-4900-b922-e4dd80aebc0f", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f68fbb46", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"", Pod:"calico-apiserver-76f68fbb46-wvb9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32329d7cf7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:59.128583 env[1585]: 2025-05-17 00:50:59.065 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.7/32] ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-wvb9m" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:50:59.128583 env[1585]: 2025-05-17 00:50:59.065 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32329d7cf7a ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-wvb9m" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:50:59.128583 env[1585]: 2025-05-17 00:50:59.080 [INFO][4820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-wvb9m" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 
00:50:59.128583 env[1585]: 2025-05-17 00:50:59.081 [INFO][4820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-wvb9m" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0", GenerateName:"calico-apiserver-76f68fbb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"627944b6-ba25-4900-b922-e4dd80aebc0f", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f68fbb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c", Pod:"calico-apiserver-76f68fbb46-wvb9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32329d7cf7a", MAC:"9e:b3:b7:82:5b:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 
00:50:59.128583 env[1585]: 2025-05-17 00:50:59.112 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c" Namespace="calico-apiserver" Pod="calico-apiserver-76f68fbb46-wvb9m" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:50:59.136000 audit[4863]: NETFILTER_CFG table=filter:125 family=2 entries=57 op=nft_register_chain pid=4863 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:59.136000 audit[4863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27828 a0=3 a1=ffffe0ddf2d0 a2=0 a3=ffff8d6c9fa8 items=0 ppid=4043 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:59.136000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:59.188710 systemd-networkd[1757]: cali4d264aac148: Link UP May 17 00:50:59.189126 env[1585]: time="2025-05-17T00:50:59.189054586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:59.189338 env[1585]: time="2025-05-17T00:50:59.189304815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:59.189469 env[1585]: time="2025-05-17T00:50:59.189442449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:59.201134 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:50:59.201274 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4d264aac148: link becomes ready May 17 00:50:59.201731 systemd-networkd[1757]: cali4d264aac148: Gained carrier May 17 00:50:59.206682 env[1585]: time="2025-05-17T00:50:59.205849901Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c pid=4874 runtime=io.containerd.runc.v2 May 17 00:50:59.208777 systemd-networkd[1757]: cali253efcd4292: Gained IPv6LL May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:58.986 [INFO][4831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0 calico-kube-controllers-545d5864b5- calico-system bc9691d8-0209-4d9d-befa-8a2f81686ebd 962 0 2025-05-17 00:50:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:545d5864b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.7-n-e6f3637a46 calico-kube-controllers-545d5864b5-tt9qd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4d264aac148 [] [] }} ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Namespace="calico-system" Pod="calico-kube-controllers-545d5864b5-tt9qd" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:58.986 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" 
Namespace="calico-system" Pod="calico-kube-controllers-545d5864b5-tt9qd" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.044 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" HandleID="k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.045 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" HandleID="k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003214c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-e6f3637a46", "pod":"calico-kube-controllers-545d5864b5-tt9qd", "timestamp":"2025-05-17 00:50:59.044373337 +0000 UTC"}, Hostname:"ci-3510.3.7-n-e6f3637a46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.045 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.064 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.064 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-e6f3637a46' May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.129 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.134 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.147 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.150 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.153 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.153 [INFO][4851] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.155 [INFO][4851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9 May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.161 [INFO][4851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.173 [INFO][4851] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.75.8/26] block=192.168.75.0/26 
handle="k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.173 [INFO][4851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.75.8/26] handle="k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" host="ci-3510.3.7-n-e6f3637a46" May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.173 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:50:59.218523 env[1585]: 2025-05-17 00:50:59.173 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.8/26] IPv6=[] ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" HandleID="k8s-pod-network.59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:59.219101 env[1585]: 2025-05-17 00:50:59.175 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Namespace="calico-system" Pod="calico-kube-controllers-545d5864b5-tt9qd" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0", GenerateName:"calico-kube-controllers-545d5864b5-", Namespace:"calico-system", SelfLink:"", UID:"bc9691d8-0209-4d9d-befa-8a2f81686ebd", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545d5864b5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"", Pod:"calico-kube-controllers-545d5864b5-tt9qd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d264aac148", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:59.219101 env[1585]: 2025-05-17 00:50:59.175 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.8/32] ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Namespace="calico-system" Pod="calico-kube-controllers-545d5864b5-tt9qd" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:59.219101 env[1585]: 2025-05-17 00:50:59.175 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d264aac148 ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Namespace="calico-system" Pod="calico-kube-controllers-545d5864b5-tt9qd" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:59.219101 env[1585]: 2025-05-17 00:50:59.202 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Namespace="calico-system" Pod="calico-kube-controllers-545d5864b5-tt9qd" 
WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:59.219101 env[1585]: 2025-05-17 00:50:59.202 [INFO][4831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Namespace="calico-system" Pod="calico-kube-controllers-545d5864b5-tt9qd" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0", GenerateName:"calico-kube-controllers-545d5864b5-", Namespace:"calico-system", SelfLink:"", UID:"bc9691d8-0209-4d9d-befa-8a2f81686ebd", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545d5864b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9", Pod:"calico-kube-controllers-545d5864b5-tt9qd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d264aac148", 
MAC:"6a:a4:86:15:f4:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:50:59.219101 env[1585]: 2025-05-17 00:50:59.217 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9" Namespace="calico-system" Pod="calico-kube-controllers-545d5864b5-tt9qd" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:50:59.258000 audit[4913]: NETFILTER_CFG table=filter:126 family=2 entries=60 op=nft_register_chain pid=4913 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 00:50:59.258000 audit[4913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26704 a0=3 a1=fffff5f6a930 a2=0 a3=ffff90bd0fa8 items=0 ppid=4043 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:59.258000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 00:50:59.269165 env[1585]: time="2025-05-17T00:50:59.269114939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:59.269533 env[1585]: time="2025-05-17T00:50:59.269478403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:59.269643 env[1585]: time="2025-05-17T00:50:59.269621876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:59.270093 env[1585]: time="2025-05-17T00:50:59.269877385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9 pid=4914 runtime=io.containerd.runc.v2 May 17 00:50:59.288949 env[1585]: time="2025-05-17T00:50:59.288907678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76f68fbb46-wvb9m,Uid:627944b6-ba25-4900-b922-e4dd80aebc0f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c\"" May 17 00:50:59.353554 env[1585]: time="2025-05-17T00:50:59.353477096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545d5864b5-tt9qd,Uid:bc9691d8-0209-4d9d-befa-8a2f81686ebd,Namespace:calico-system,Attempt:1,} returns sandbox id \"59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9\"" May 17 00:50:59.464039 systemd-networkd[1757]: cali083bb67f754: Gained IPv6LL May 17 00:50:59.870000 audit[4959]: NETFILTER_CFG table=filter:127 family=2 entries=14 op=nft_register_rule pid=4959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:59.870000 audit[4959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe17e87e0 a2=0 a3=1 items=0 ppid=2769 pid=4959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:59.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:50:59.880000 audit[4959]: NETFILTER_CFG table=nat:128 family=2 entries=56 op=nft_register_chain pid=4959 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:50:59.880000 audit[4959]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe17e87e0 a2=0 a3=1 items=0 ppid=2769 pid=4959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:59.880000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:00.167741 systemd-networkd[1757]: cali6b824d24d4d: Gained IPv6LL May 17 00:51:00.377761 env[1585]: time="2025-05-17T00:51:00.377718084Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:00.385004 env[1585]: time="2025-05-17T00:51:00.384965441Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:00.388134 env[1585]: time="2025-05-17T00:51:00.388097061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:00.393668 env[1585]: time="2025-05-17T00:51:00.393636414Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:00.394078 env[1585]: time="2025-05-17T00:51:00.394053836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:51:00.397971 env[1585]: time="2025-05-17T00:51:00.397947182Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:51:00.399040 env[1585]: time="2025-05-17T00:51:00.399013015Z" level=info msg="CreateContainer within sandbox \"f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:51:00.445456 env[1585]: time="2025-05-17T00:51:00.445350868Z" level=info msg="CreateContainer within sandbox \"f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"735b3b3463c8781bc5aafdfe37fdf446efbae9d189643c53711add8a14c17fd5\"" May 17 00:51:00.447771 env[1585]: time="2025-05-17T00:51:00.446646931Z" level=info msg="StartContainer for \"735b3b3463c8781bc5aafdfe37fdf446efbae9d189643c53711add8a14c17fd5\"" May 17 00:51:00.518350 env[1585]: time="2025-05-17T00:51:00.518297696Z" level=info msg="StartContainer for \"735b3b3463c8781bc5aafdfe37fdf446efbae9d189643c53711add8a14c17fd5\" returns successfully" May 17 00:51:00.576890 env[1585]: time="2025-05-17T00:51:00.576830846Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:51:00.580814 env[1585]: time="2025-05-17T00:51:00.580767791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:51:00.581212 kubelet[2655]: E0517 00:51:00.580997 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 
403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:51:00.581212 kubelet[2655]: E0517 00:51:00.581046 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:51:00.582198 kubelet[2655]: E0517 00:51:00.582128 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbrwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPath
Expr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7vz96_calico-system(91ca7308-a9d6-4eb9-b8c4-045158a71d72): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:51:00.582347 env[1585]: time="2025-05-17T00:51:00.582296042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:51:00.584864 kubelet[2655]: E0517 00:51:00.584228 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:51:00.603738 systemd[1]: run-containerd-runc-k8s.io-735b3b3463c8781bc5aafdfe37fdf446efbae9d189643c53711add8a14c17fd5-runc.u2JnZn.mount: Deactivated successfully. May 17 00:51:00.743624 systemd-networkd[1757]: cali32329d7cf7a: Gained IPv6LL May 17 00:51:00.784703 kubelet[2655]: E0517 00:51:00.784510 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:51:00.813464 kubelet[2655]: I0517 00:51:00.813397 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76f68fbb46-gf7hk" podStartSLOduration=28.527468362 podStartE2EDuration="31.81336702s" podCreationTimestamp="2025-05-17 00:50:29 +0000 UTC" firstStartedPulling="2025-05-17 00:50:57.109533476 +0000 UTC m=+46.840722179" lastFinishedPulling="2025-05-17 00:51:00.395432054 +0000 UTC m=+50.126620837" observedRunningTime="2025-05-17 00:51:00.797450569 +0000 UTC m=+50.528639312" watchObservedRunningTime="2025-05-17 00:51:00.81336702 +0000 UTC m=+50.544555763" May 17 00:51:00.824000 audit[5003]: NETFILTER_CFG table=filter:129 family=2 entries=14 op=nft_register_rule pid=5003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:00.824000 audit[5003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff8fd0e80 a2=0 a3=1 items=0 ppid=2769 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 17 00:51:00.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:00.829000 audit[5003]: NETFILTER_CFG table=nat:130 family=2 entries=20 op=nft_register_rule pid=5003 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:00.829000 audit[5003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff8fd0e80 a2=0 a3=1 items=0 ppid=2769 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:00.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:01.127693 systemd-networkd[1757]: cali4d264aac148: Gained IPv6LL May 17 00:51:01.843000 audit[5005]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=5005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:01.843000 audit[5005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc58b5780 a2=0 a3=1 items=0 ppid=2769 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:01.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:01.847000 audit[5005]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:01.847000 audit[5005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc58b5780 a2=0 a3=1 items=0 ppid=2769 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:01.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:01.964567 env[1585]: time="2025-05-17T00:51:01.964523554Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:01.972796 env[1585]: time="2025-05-17T00:51:01.972756874Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:01.979159 env[1585]: time="2025-05-17T00:51:01.979117397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:01.984877 env[1585]: time="2025-05-17T00:51:01.984837827Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:01.985682 env[1585]: time="2025-05-17T00:51:01.985655391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 17 00:51:01.987576 env[1585]: time="2025-05-17T00:51:01.987548589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:51:01.988982 env[1585]: time="2025-05-17T00:51:01.988925248Z" level=info msg="CreateContainer within sandbox \"309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:51:02.025833 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640144451.mount: Deactivated successfully. May 17 00:51:02.044699 env[1585]: time="2025-05-17T00:51:02.044643935Z" level=info msg="CreateContainer within sandbox \"309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9916f5be4e31751b85fa4bc679c95d7e36ddc41a8fe37018793150f469170bf8\"" May 17 00:51:02.045655 env[1585]: time="2025-05-17T00:51:02.045615894Z" level=info msg="StartContainer for \"9916f5be4e31751b85fa4bc679c95d7e36ddc41a8fe37018793150f469170bf8\"" May 17 00:51:02.121564 env[1585]: time="2025-05-17T00:51:02.120767961Z" level=info msg="StartContainer for \"9916f5be4e31751b85fa4bc679c95d7e36ddc41a8fe37018793150f469170bf8\" returns successfully" May 17 00:51:02.312246 env[1585]: time="2025-05-17T00:51:02.312192459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:02.321452 env[1585]: time="2025-05-17T00:51:02.321403385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:02.326156 env[1585]: time="2025-05-17T00:51:02.326100944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:02.333476 env[1585]: time="2025-05-17T00:51:02.333425991Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:02.334086 env[1585]: time="2025-05-17T00:51:02.334042165Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 00:51:02.336284 env[1585]: time="2025-05-17T00:51:02.336233711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:51:02.338082 env[1585]: time="2025-05-17T00:51:02.338041314Z" level=info msg="CreateContainer within sandbox \"bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:51:02.382180 env[1585]: time="2025-05-17T00:51:02.382076112Z" level=info msg="CreateContainer within sandbox \"bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0762c31df389e1fba29350086e3e34f631930487ee3dd86d61d03a79b717ba11\"" May 17 00:51:02.382833 env[1585]: time="2025-05-17T00:51:02.382809320Z" level=info msg="StartContainer for \"0762c31df389e1fba29350086e3e34f631930487ee3dd86d61d03a79b717ba11\"" May 17 00:51:02.456330 env[1585]: time="2025-05-17T00:51:02.456275500Z" level=info msg="StartContainer for \"0762c31df389e1fba29350086e3e34f631930487ee3dd86d61d03a79b717ba11\" returns successfully" May 17 00:51:02.835000 audit[5078]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:02.842301 kernel: kauditd_printk_skb: 41 callbacks suppressed May 17 00:51:02.842437 kernel: audit: type=1325 audit(1747443062.835:434): table=filter:133 family=2 entries=14 op=nft_register_rule pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:02.835000 audit[5078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe0ef1360 a2=0 a3=1 items=0 ppid=2769 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:02.884984 kernel: audit: type=1300 audit(1747443062.835:434): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe0ef1360 a2=0 a3=1 items=0 ppid=2769 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:02.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:02.899608 kernel: audit: type=1327 audit(1747443062.835:434): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:02.900000 audit[5078]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:02.900000 audit[5078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe0ef1360 a2=0 a3=1 items=0 ppid=2769 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:02.942762 kernel: audit: type=1325 audit(1747443062.900:435): table=nat:134 family=2 entries=20 op=nft_register_rule pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:02.942888 kernel: audit: type=1300 audit(1747443062.900:435): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe0ef1360 a2=0 a3=1 items=0 ppid=2769 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:02.900000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:02.957932 kernel: audit: type=1327 audit(1747443062.900:435): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:03.148004 systemd[1]: run-containerd-runc-k8s.io-75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d-runc.OgYbLj.mount: Deactivated successfully. May 17 00:51:03.297994 kubelet[2655]: I0517 00:51:03.297930 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76f68fbb46-wvb9m" podStartSLOduration=31.257218931 podStartE2EDuration="34.297910464s" podCreationTimestamp="2025-05-17 00:50:29 +0000 UTC" firstStartedPulling="2025-05-17 00:50:59.294625617 +0000 UTC m=+49.025814320" lastFinishedPulling="2025-05-17 00:51:02.33531711 +0000 UTC m=+52.066505853" observedRunningTime="2025-05-17 00:51:02.807363652 +0000 UTC m=+52.538552395" watchObservedRunningTime="2025-05-17 00:51:03.297910464 +0000 UTC m=+53.029099207" May 17 00:51:03.791152 kubelet[2655]: I0517 00:51:03.790737 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:51:03.995000 audit[5106]: NETFILTER_CFG table=filter:135 family=2 entries=13 op=nft_register_rule pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:04.014510 kernel: audit: type=1325 audit(1747443063.995:436): table=filter:135 family=2 entries=13 op=nft_register_rule pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:03.995000 audit[5106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffed2409e0 a2=0 a3=1 items=0 ppid=2769 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:03.995000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:04.097450 kernel: audit: type=1300 audit(1747443063.995:436): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffed2409e0 a2=0 a3=1 items=0 ppid=2769 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:04.097583 kernel: audit: type=1327 audit(1747443063.995:436): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:04.082000 audit[5106]: NETFILTER_CFG table=nat:136 family=2 entries=27 op=nft_register_chain pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:04.112260 kernel: audit: type=1325 audit(1747443064.082:437): table=nat:136 family=2 entries=27 op=nft_register_chain pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:04.082000 audit[5106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffed2409e0 a2=0 a3=1 items=0 ppid=2769 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:04.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:04.622103 env[1585]: time="2025-05-17T00:51:04.622061313Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:04.630123 env[1585]: time="2025-05-17T00:51:04.630086024Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:04.634846 env[1585]: time="2025-05-17T00:51:04.634808470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:04.642225 env[1585]: time="2025-05-17T00:51:04.642190887Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:04.642987 env[1585]: time="2025-05-17T00:51:04.642959616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 17 00:51:04.646752 env[1585]: time="2025-05-17T00:51:04.646722941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:51:04.664801 env[1585]: time="2025-05-17T00:51:04.664761561Z" level=info msg="CreateContainer within sandbox \"59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:51:04.696995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62806073.mount: Deactivated successfully. 
May 17 00:51:04.722360 env[1585]: time="2025-05-17T00:51:04.722312241Z" level=info msg="CreateContainer within sandbox \"59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db\"" May 17 00:51:04.724059 env[1585]: time="2025-05-17T00:51:04.723448354Z" level=info msg="StartContainer for \"2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db\"" May 17 00:51:04.802053 env[1585]: time="2025-05-17T00:51:04.800295602Z" level=info msg="StartContainer for \"2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db\" returns successfully" May 17 00:51:05.857111 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.ToEaxA.mount: Deactivated successfully. May 17 00:51:05.927307 kubelet[2655]: I0517 00:51:05.926940 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-545d5864b5-tt9qd" podStartSLOduration=28.639063514 podStartE2EDuration="33.926920713s" podCreationTimestamp="2025-05-17 00:50:32 +0000 UTC" firstStartedPulling="2025-05-17 00:50:59.356392444 +0000 UTC m=+49.087581147" lastFinishedPulling="2025-05-17 00:51:04.644249603 +0000 UTC m=+54.375438346" observedRunningTime="2025-05-17 00:51:05.823295878 +0000 UTC m=+55.554484621" watchObservedRunningTime="2025-05-17 00:51:05.926920713 +0000 UTC m=+55.658109456" May 17 00:51:06.537296 env[1585]: time="2025-05-17T00:51:06.537244970Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:06.545416 env[1585]: time="2025-05-17T00:51:06.545378650Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:06.549725 env[1585]: time="2025-05-17T00:51:06.549692280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:06.554445 env[1585]: time="2025-05-17T00:51:06.554417094Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:06.555048 env[1585]: time="2025-05-17T00:51:06.555018910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 17 00:51:06.566669 env[1585]: time="2025-05-17T00:51:06.566623053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:51:06.567182 env[1585]: time="2025-05-17T00:51:06.567149632Z" level=info msg="CreateContainer within sandbox \"309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:51:06.620519 env[1585]: time="2025-05-17T00:51:06.620431893Z" level=info msg="CreateContainer within sandbox \"309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"16574ef004452cfe7c1bf560f20d4746ddb83b30ed593767f07c7734cd3c6045\"" May 17 00:51:06.622758 env[1585]: time="2025-05-17T00:51:06.622720243Z" level=info msg="StartContainer for \"16574ef004452cfe7c1bf560f20d4746ddb83b30ed593767f07c7734cd3c6045\"" May 17 00:51:06.655255 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1161888417.mount: Deactivated successfully. May 17 00:51:06.684772 env[1585]: time="2025-05-17T00:51:06.684732240Z" level=info msg="StartContainer for \"16574ef004452cfe7c1bf560f20d4746ddb83b30ed593767f07c7734cd3c6045\" returns successfully" May 17 00:51:06.702828 kubelet[2655]: I0517 00:51:06.702792 2655 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:51:06.702828 kubelet[2655]: I0517 00:51:06.702828 2655 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:51:06.774878 env[1585]: time="2025-05-17T00:51:06.774819531Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:51:06.778451 env[1585]: time="2025-05-17T00:51:06.778397390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:51:06.778822 kubelet[2655]: E0517 00:51:06.778791 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:51:06.778961 kubelet[2655]: E0517 00:51:06.778942 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:51:06.779172 kubelet[2655]: E0517 00:51:06.779134 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c3d4b8a8cf3948f2b6deeb355e35bdc2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:51:06.781613 env[1585]: time="2025-05-17T00:51:06.781581704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:51:06.954248 env[1585]: time="2025-05-17T00:51:06.954105028Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:51:06.958062 env[1585]: time="2025-05-17T00:51:06.958002354Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:51:06.958310 kubelet[2655]: E0517 00:51:06.958271 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:51:06.958698 kubelet[2655]: E0517 00:51:06.958680 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:51:06.959261 kubelet[2655]: E0517 00:51:06.958880 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:51:06.960661 kubelet[2655]: E0517 00:51:06.960596 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:51:10.652865 env[1585]: time="2025-05-17T00:51:10.652820061Z" level=info msg="StopPodSandbox for \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\"" May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.731 [WARNING][5230] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d05e6a1-f683-492a-9425-3efff5a40b11", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c", Pod:"csi-node-driver-8ksxs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b824d24d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.731 [INFO][5230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.731 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" iface="eth0" netns="" May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.731 [INFO][5230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.731 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.777 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.777 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.777 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.820 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.821 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.835 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:10.839660 env[1585]: 2025-05-17 00:51:10.838 [INFO][5230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:51:10.840107 env[1585]: time="2025-05-17T00:51:10.839686371Z" level=info msg="TearDown network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\" successfully" May 17 00:51:10.840107 env[1585]: time="2025-05-17T00:51:10.839716410Z" level=info msg="StopPodSandbox for \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\" returns successfully" May 17 00:51:10.840592 env[1585]: time="2025-05-17T00:51:10.840560979Z" level=info msg="RemovePodSandbox for \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\"" May 17 00:51:10.840667 env[1585]: time="2025-05-17T00:51:10.840597658Z" level=info msg="Forcibly stopping sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\"" May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.875 [WARNING][5251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1d05e6a1-f683-492a-9425-3efff5a40b11", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"309626b9d26aac3c0e30b48388b066328a87e0ede5da82a4d4689cf36463cd1c", Pod:"csi-node-driver-8ksxs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b824d24d4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.876 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.876 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" iface="eth0" netns="" May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.876 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.876 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.898 [INFO][5258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.898 [INFO][5258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.898 [INFO][5258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.906 [WARNING][5258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.907 [INFO][5258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" HandleID="k8s-pod-network.7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" Workload="ci--3510.3.7--n--e6f3637a46-k8s-csi--node--driver--8ksxs-eth0" May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.908 [INFO][5258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:10.914233 env[1585]: 2025-05-17 00:51:10.909 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78" May 17 00:51:10.914233 env[1585]: time="2025-05-17T00:51:10.911429636Z" level=info msg="TearDown network for sandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\" successfully" May 17 00:51:10.922300 env[1585]: time="2025-05-17T00:51:10.922259602Z" level=info msg="RemovePodSandbox \"7641d899fab9abfd96944fc097f8cbb77627151d13d557131ae8093e2c582d78\" returns successfully" May 17 00:51:10.924021 env[1585]: time="2025-05-17T00:51:10.923996138Z" level=info msg="StopPodSandbox for \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\"" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:10.968 [WARNING][5273] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:10.969 [INFO][5273] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:10.969 [INFO][5273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" iface="eth0" netns="" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:10.969 [INFO][5273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:10.969 [INFO][5273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:10.992 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:10.992 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:10.992 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:11.002 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:11.002 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:11.004 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.007338 env[1585]: 2025-05-17 00:51:11.006 [INFO][5273] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:51:11.007792 env[1585]: time="2025-05-17T00:51:11.007369184Z" level=info msg="TearDown network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\" successfully" May 17 00:51:11.007792 env[1585]: time="2025-05-17T00:51:11.007421542Z" level=info msg="StopPodSandbox for \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\" returns successfully" May 17 00:51:11.007903 env[1585]: time="2025-05-17T00:51:11.007856127Z" level=info msg="RemovePodSandbox for \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\"" May 17 00:51:11.007954 env[1585]: time="2025-05-17T00:51:11.007910205Z" level=info msg="Forcibly stopping sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\"" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.046 [WARNING][5295] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" 
WorkloadEndpoint="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.046 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.046 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" iface="eth0" netns="" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.046 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.046 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.074 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.074 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.074 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.082 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.082 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" HandleID="k8s-pod-network.6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-whisker--75d6776cf8--sl7xk-eth0" May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.084 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.086842 env[1585]: 2025-05-17 00:51:11.085 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b" May 17 00:51:11.087228 env[1585]: time="2025-05-17T00:51:11.086875021Z" level=info msg="TearDown network for sandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\" successfully" May 17 00:51:11.096960 env[1585]: time="2025-05-17T00:51:11.096896622Z" level=info msg="RemovePodSandbox \"6208978a085a46a3cbe3594b3d8360e95e2a1c80d4d840a32aac5feeb280f51b\" returns successfully" May 17 00:51:11.097402 env[1585]: time="2025-05-17T00:51:11.097370125Z" level=info msg="StopPodSandbox for \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\"" May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.145 [WARNING][5316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0", GenerateName:"calico-apiserver-76f68fbb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"627944b6-ba25-4900-b922-e4dd80aebc0f", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f68fbb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c", Pod:"calico-apiserver-76f68fbb46-wvb9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32329d7cf7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.145 [INFO][5316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.146 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" iface="eth0" netns="" May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.146 [INFO][5316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.146 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.176 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.177 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.177 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.184 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.185 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.187 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.191244 env[1585]: 2025-05-17 00:51:11.188 [INFO][5316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:51:11.191244 env[1585]: time="2025-05-17T00:51:11.189820899Z" level=info msg="TearDown network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\" successfully" May 17 00:51:11.191244 env[1585]: time="2025-05-17T00:51:11.189851898Z" level=info msg="StopPodSandbox for \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\" returns successfully" May 17 00:51:11.191244 env[1585]: time="2025-05-17T00:51:11.190282362Z" level=info msg="RemovePodSandbox for \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\"" May 17 00:51:11.191244 env[1585]: time="2025-05-17T00:51:11.190314041Z" level=info msg="Forcibly stopping sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\"" May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.242 [WARNING][5338] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0", GenerateName:"calico-apiserver-76f68fbb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"627944b6-ba25-4900-b922-e4dd80aebc0f", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f68fbb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"bf830e647d4d80fffd0e3c4aeda79eba43736ec2a9bb3f5d0f0ceceeb6bb343c", Pod:"calico-apiserver-76f68fbb46-wvb9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32329d7cf7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.242 [INFO][5338] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.242 [INFO][5338] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" iface="eth0" netns="" May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.242 [INFO][5338] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.242 [INFO][5338] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.275 [INFO][5345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.275 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.275 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.297 [WARNING][5345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.297 [INFO][5345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" HandleID="k8s-pod-network.a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--wvb9m-eth0" May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.305 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.307671 env[1585]: 2025-05-17 00:51:11.306 [INFO][5338] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c" May 17 00:51:11.308134 env[1585]: time="2025-05-17T00:51:11.307700203Z" level=info msg="TearDown network for sandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\" successfully" May 17 00:51:11.316851 env[1585]: time="2025-05-17T00:51:11.316800198Z" level=info msg="RemovePodSandbox \"a1ff3c36ad191fe9f00c504db7fd8ef88dfede488ac4a4bcb6b7fd60ed6a470c\" returns successfully" May 17 00:51:11.317358 env[1585]: time="2025-05-17T00:51:11.317337618Z" level=info msg="StopPodSandbox for \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\"" May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.364 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"91ca7308-a9d6-4eb9-b8c4-045158a71d72", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0", Pod:"goldmane-8f77d7b6c-7vz96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali083bb67f754", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.364 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.364 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" iface="eth0" netns="" May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.364 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.364 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.391 [INFO][5368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.391 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.391 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.404 [WARNING][5368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.404 [INFO][5368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.406 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.409664 env[1585]: 2025-05-17 00:51:11.407 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:51:11.410225 env[1585]: time="2025-05-17T00:51:11.410175858Z" level=info msg="TearDown network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\" successfully" May 17 00:51:11.410310 env[1585]: time="2025-05-17T00:51:11.410290974Z" level=info msg="StopPodSandbox for \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\" returns successfully" May 17 00:51:11.410937 env[1585]: time="2025-05-17T00:51:11.410906912Z" level=info msg="RemovePodSandbox for \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\"" May 17 00:51:11.411078 env[1585]: time="2025-05-17T00:51:11.411040987Z" level=info msg="Forcibly stopping sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\"" May 17 00:51:11.484312 env[1585]: time="2025-05-17T00:51:11.482583789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:51:11.511744 kubelet[2655]: I0517 00:51:11.511241 2655 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/csi-node-driver-8ksxs" podStartSLOduration=31.591342676 podStartE2EDuration="39.511224564s" podCreationTimestamp="2025-05-17 00:50:32 +0000 UTC" firstStartedPulling="2025-05-17 00:50:58.64077864 +0000 UTC m=+48.371967383" lastFinishedPulling="2025-05-17 00:51:06.560660528 +0000 UTC m=+56.291849271" observedRunningTime="2025-05-17 00:51:06.833352665 +0000 UTC m=+56.564541408" watchObservedRunningTime="2025-05-17 00:51:11.511224564 +0000 UTC m=+61.242413307" May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.466 [WARNING][5383] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"91ca7308-a9d6-4eb9-b8c4-045158a71d72", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"0438d8dbb907bbb039ad442193894e19962dae970bd2783461f9c510cc2ac7c0", Pod:"goldmane-8f77d7b6c-7vz96", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali083bb67f754", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.466 [INFO][5383] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.466 [INFO][5383] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" iface="eth0" netns="" May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.466 [INFO][5383] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.466 [INFO][5383] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.505 [INFO][5390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.505 [INFO][5390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.505 [INFO][5390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.522 [WARNING][5390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.522 [INFO][5390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" HandleID="k8s-pod-network.18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" Workload="ci--3510.3.7--n--e6f3637a46-k8s-goldmane--8f77d7b6c--7vz96-eth0" May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.525 [INFO][5390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.531574 env[1585]: 2025-05-17 00:51:11.527 [INFO][5383] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6" May 17 00:51:11.532105 env[1585]: time="2025-05-17T00:51:11.532053019Z" level=info msg="TearDown network for sandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\" successfully" May 17 00:51:11.540097 env[1585]: time="2025-05-17T00:51:11.540050133Z" level=info msg="RemovePodSandbox \"18ed60873af1b6d842c3f72d71297cf9c1daa68634fbe71f49773320d27710f6\" returns successfully" May 17 00:51:11.540779 env[1585]: time="2025-05-17T00:51:11.540754268Z" level=info msg="StopPodSandbox for \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\"" May 17 00:51:11.662817 env[1585]: time="2025-05-17T00:51:11.662754065Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:51:11.670078 env[1585]: time="2025-05-17T00:51:11.669992566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to 
resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:51:11.670979 kubelet[2655]: E0517 00:51:11.670401 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:51:11.670979 kubelet[2655]: E0517 00:51:11.670453 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:51:11.670979 kubelet[2655]: E0517 00:51:11.670594 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbrwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7vz96_calico-system(91ca7308-a9d6-4eb9-b8c4-045158a71d72): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:51:11.674295 kubelet[2655]: E0517 00:51:11.674193 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.615 [WARNING][5404] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0", GenerateName:"calico-kube-controllers-545d5864b5-", Namespace:"calico-system", SelfLink:"", UID:"bc9691d8-0209-4d9d-befa-8a2f81686ebd", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545d5864b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9", Pod:"calico-kube-controllers-545d5864b5-tt9qd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d264aac148", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.615 [INFO][5404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.615 [INFO][5404] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" iface="eth0" netns="" May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.615 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.615 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.652 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.652 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.652 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.668 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.668 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.676 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.679478 env[1585]: 2025-05-17 00:51:11.678 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:51:11.679987 env[1585]: time="2025-05-17T00:51:11.679938531Z" level=info msg="TearDown network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\" successfully" May 17 00:51:11.680063 env[1585]: time="2025-05-17T00:51:11.680047687Z" level=info msg="StopPodSandbox for \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\" returns successfully" May 17 00:51:11.680596 env[1585]: time="2025-05-17T00:51:11.680571068Z" level=info msg="RemovePodSandbox for \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\"" May 17 00:51:11.680783 env[1585]: time="2025-05-17T00:51:11.680738262Z" level=info msg="Forcibly stopping sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\"" May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.744 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0", GenerateName:"calico-kube-controllers-545d5864b5-", Namespace:"calico-system", SelfLink:"", UID:"bc9691d8-0209-4d9d-befa-8a2f81686ebd", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545d5864b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"59f197f9a87255fc508eaff75df07256bbeeeb9edbfb554a2b75b7ca9e5cc8c9", Pod:"calico-kube-controllers-545d5864b5-tt9qd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d264aac148", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.744 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.744 [INFO][5427] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" iface="eth0" netns="" May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.744 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.744 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.768 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.768 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.769 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.780 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.780 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" HandleID="k8s-pod-network.6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--kube--controllers--545d5864b5--tt9qd-eth0" May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.781 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.784453 env[1585]: 2025-05-17 00:51:11.783 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0" May 17 00:51:11.785060 env[1585]: time="2025-05-17T00:51:11.785011773Z" level=info msg="TearDown network for sandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\" successfully" May 17 00:51:11.801478 env[1585]: time="2025-05-17T00:51:11.801431266Z" level=info msg="RemovePodSandbox \"6fced4ee67a2222bc51b88b5b247d76bab1a388c9af6cb96e34309a1141164f0\" returns successfully" May 17 00:51:11.802117 env[1585]: time="2025-05-17T00:51:11.802090682Z" level=info msg="StopPodSandbox for \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\"" May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.850 [WARNING][5448] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8dec4dde-a041-4dab-a0cc-b7a73829dbcb", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b", Pod:"coredns-7c65d6cfc9-kkp5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7af66c21218", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.851 
[INFO][5448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.851 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" iface="eth0" netns="" May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.851 [INFO][5448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.851 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.870 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.870 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.870 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.880 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.880 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.881 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.887122 env[1585]: 2025-05-17 00:51:11.885 [INFO][5448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:51:11.887847 env[1585]: time="2025-05-17T00:51:11.887815656Z" level=info msg="TearDown network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\" successfully" May 17 00:51:11.887967 env[1585]: time="2025-05-17T00:51:11.887932812Z" level=info msg="StopPodSandbox for \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\" returns successfully" May 17 00:51:11.888540 env[1585]: time="2025-05-17T00:51:11.888516871Z" level=info msg="RemovePodSandbox for \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\"" May 17 00:51:11.888831 env[1585]: time="2025-05-17T00:51:11.888766542Z" level=info msg="Forcibly stopping sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\"" May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.920 [WARNING][5469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8dec4dde-a041-4dab-a0cc-b7a73829dbcb", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"b43c2d76d3873f2597c55fa24f574bde66e81c85c557b62425ed6189cb3fa44b", Pod:"coredns-7c65d6cfc9-kkp5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7af66c21218", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.920 
[INFO][5469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.920 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" iface="eth0" netns="" May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.920 [INFO][5469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.921 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.969 [INFO][5476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.969 [INFO][5476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.969 [INFO][5476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.981 [WARNING][5476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.981 [INFO][5476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" HandleID="k8s-pod-network.83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--kkp5z-eth0" May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.982 [INFO][5476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:11.985531 env[1585]: 2025-05-17 00:51:11.983 [INFO][5469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772" May 17 00:51:11.986087 env[1585]: time="2025-05-17T00:51:11.986041583Z" level=info msg="TearDown network for sandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\" successfully" May 17 00:51:12.000051 env[1585]: time="2025-05-17T00:51:12.000004244Z" level=info msg="RemovePodSandbox \"83f59f3ba3ef754df6f4972180d8b7ab2613af6cb0f952f6c34e93b9d2f01772\" returns successfully" May 17 00:51:12.000698 env[1585]: time="2025-05-17T00:51:12.000675980Z" level=info msg="StopPodSandbox for \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\"" May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.044 [WARNING][5493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0", GenerateName:"calico-apiserver-76f68fbb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"a468b308-fb40-4092-92f2-ce4832332fb4", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f68fbb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b", Pod:"calico-apiserver-76f68fbb46-gf7hk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali182b58ffca2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.045 [INFO][5493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.045 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" iface="eth0" netns="" May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.045 [INFO][5493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.045 [INFO][5493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.070 [INFO][5501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.070 [INFO][5501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.070 [INFO][5501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.082 [WARNING][5501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.082 [INFO][5501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.084 [INFO][5501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:12.087534 env[1585]: 2025-05-17 00:51:12.085 [INFO][5493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:51:12.088072 env[1585]: time="2025-05-17T00:51:12.088029153Z" level=info msg="TearDown network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\" successfully" May 17 00:51:12.088146 env[1585]: time="2025-05-17T00:51:12.088130589Z" level=info msg="StopPodSandbox for \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\" returns successfully" May 17 00:51:12.089019 env[1585]: time="2025-05-17T00:51:12.088990359Z" level=info msg="RemovePodSandbox for \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\"" May 17 00:51:12.089117 env[1585]: time="2025-05-17T00:51:12.089025598Z" level=info msg="Forcibly stopping sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\"" May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.132 [WARNING][5516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0", GenerateName:"calico-apiserver-76f68fbb46-", Namespace:"calico-apiserver", SelfLink:"", UID:"a468b308-fb40-4092-92f2-ce4832332fb4", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76f68fbb46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"f57d9b7557e69ff209bc2f33e83e340df90cef36bb8f69b707bcf46ec60a846b", Pod:"calico-apiserver-76f68fbb46-gf7hk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali182b58ffca2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.133 [INFO][5516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.133 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" iface="eth0" netns="" May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.133 [INFO][5516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.133 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.176 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.176 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.177 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.185 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.185 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" HandleID="k8s-pod-network.68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" Workload="ci--3510.3.7--n--e6f3637a46-k8s-calico--apiserver--76f68fbb46--gf7hk-eth0" May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.191 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:12.201407 env[1585]: 2025-05-17 00:51:12.194 [INFO][5516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8" May 17 00:51:12.201950 env[1585]: time="2025-05-17T00:51:12.201910075Z" level=info msg="TearDown network for sandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\" successfully" May 17 00:51:12.214634 env[1585]: time="2025-05-17T00:51:12.214588110Z" level=info msg="RemovePodSandbox \"68fb664acdb0666225f7df70edac777d3107b2eef03485cd5f6839b9fb938df8\" returns successfully" May 17 00:51:12.215276 env[1585]: time="2025-05-17T00:51:12.215252007Z" level=info msg="StopPodSandbox for \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\"" May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.286 [WARNING][5537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7f1c698e-516f-4fe7-9f6f-d4affc1db4e0", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5", Pod:"coredns-7c65d6cfc9-9b8l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali253efcd4292", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.287 
[INFO][5537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.287 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" iface="eth0" netns="" May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.287 [INFO][5537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.287 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.341 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.341 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.341 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.352 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.352 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.354 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:12.356582 env[1585]: 2025-05-17 00:51:12.355 [INFO][5537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:51:12.357112 env[1585]: time="2025-05-17T00:51:12.357056829Z" level=info msg="TearDown network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\" successfully" May 17 00:51:12.357191 env[1585]: time="2025-05-17T00:51:12.357172185Z" level=info msg="StopPodSandbox for \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\" returns successfully" May 17 00:51:12.358202 env[1585]: time="2025-05-17T00:51:12.358176150Z" level=info msg="RemovePodSandbox for \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\"" May 17 00:51:12.358441 env[1585]: time="2025-05-17T00:51:12.358402822Z" level=info msg="Forcibly stopping sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\"" May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.402 [WARNING][5560] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7f1c698e-516f-4fe7-9f6f-d4affc1db4e0", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-e6f3637a46", ContainerID:"ad0295b1304b706758d78a9205db6816499cc5c23698552326a69b4e1cd4e0e5", Pod:"coredns-7c65d6cfc9-9b8l5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali253efcd4292", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.403 
[INFO][5560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.403 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" iface="eth0" netns="" May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.403 [INFO][5560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.403 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.433 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.434 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.434 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.443 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.443 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" HandleID="k8s-pod-network.3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" Workload="ci--3510.3.7--n--e6f3637a46-k8s-coredns--7c65d6cfc9--9b8l5-eth0" May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.444 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:51:12.447411 env[1585]: 2025-05-17 00:51:12.446 [INFO][5560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b" May 17 00:51:12.448015 env[1585]: time="2025-05-17T00:51:12.447930999Z" level=info msg="TearDown network for sandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\" successfully" May 17 00:51:12.458569 env[1585]: time="2025-05-17T00:51:12.458529147Z" level=info msg="RemovePodSandbox \"3614f971a497edf8393fdce5aeda1987060e3902956a9f529477ae69032ea88b\" returns successfully" May 17 00:51:14.020047 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.nEV5pi.mount: Deactivated successfully. May 17 00:51:19.282469 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.vdnfj9.mount: Deactivated successfully. 
May 17 00:51:19.482456 kubelet[2655]: E0517 00:51:19.482417 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:51:22.482185 kubelet[2655]: E0517 00:51:22.482129 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:51:25.100600 kubelet[2655]: I0517 00:51:25.100564 2655 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:51:25.181000 audit[5628]: NETFILTER_CFG table=filter:137 family=2 entries=12 op=nft_register_rule pid=5628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:25.186771 kernel: kauditd_printk_skb: 2 callbacks suppressed May 17 00:51:25.186896 kernel: audit: type=1325 audit(1747443085.181:438): table=filter:137 family=2 entries=12 op=nft_register_rule pid=5628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:25.181000 audit[5628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffee4e4990 a2=0 a3=1 items=0 ppid=2769 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:25.226137 kernel: audit: type=1300 audit(1747443085.181:438): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffee4e4990 a2=0 a3=1 items=0 ppid=2769 pid=5628 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:25.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:25.239926 kernel: audit: type=1327 audit(1747443085.181:438): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:25.240000 audit[5628]: NETFILTER_CFG table=nat:138 family=2 entries=34 op=nft_register_chain pid=5628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:25.255251 kernel: audit: type=1325 audit(1747443085.240:439): table=nat:138 family=2 entries=34 op=nft_register_chain pid=5628 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:51:25.240000 audit[5628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffee4e4990 a2=0 a3=1 items=0 ppid=2769 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:25.288954 kernel: audit: type=1300 audit(1747443085.240:439): arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffee4e4990 a2=0 a3=1 items=0 ppid=2769 pid=5628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:25.240000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:51:25.303417 kernel: audit: type=1327 audit(1747443085.240:439): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
May 17 00:51:32.483298 env[1585]: time="2025-05-17T00:51:32.483254484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:51:32.803015 env[1585]: time="2025-05-17T00:51:32.802953334Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:51:32.806692 env[1585]: time="2025-05-17T00:51:32.806629721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:51:32.807097 kubelet[2655]: E0517 00:51:32.807037 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:51:32.807400 kubelet[2655]: E0517 00:51:32.807095 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:51:32.807400 kubelet[2655]: E0517 00:51:32.807197 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c3d4b8a8cf3948f2b6deeb355e35bdc2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:51:32.809455 env[1585]: time="2025-05-17T00:51:32.809418809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:51:32.983340 
env[1585]: time="2025-05-17T00:51:32.983276418Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:51:32.986200 env[1585]: time="2025-05-17T00:51:32.986139625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:51:32.986410 kubelet[2655]: E0517 00:51:32.986370 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:51:32.986495 kubelet[2655]: E0517 00:51:32.986420 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:51:32.986610 kubelet[2655]: E0517 00:51:32.986545 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:51:32.987920 kubelet[2655]: E0517 00:51:32.987866 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:51:33.149054 systemd[1]: run-containerd-runc-k8s.io-75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d-runc.zKM8AY.mount: Deactivated successfully. 
May 17 00:51:33.481953 env[1585]: time="2025-05-17T00:51:33.481832992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:51:33.659607 env[1585]: time="2025-05-17T00:51:33.659540842Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:51:33.662166 env[1585]: time="2025-05-17T00:51:33.662126537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:51:33.662385 kubelet[2655]: E0517 00:51:33.662337 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:51:33.662526 kubelet[2655]: E0517 00:51:33.662509 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:51:33.663141 kubelet[2655]: E0517 00:51:33.663088 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbrwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7vz96_calico-system(91ca7308-a9d6-4eb9-b8c4-045158a71d72): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:51:33.664450 kubelet[2655]: E0517 00:51:33.664416 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:51:44.025136 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.vTqP6j.mount: Deactivated successfully. 
May 17 00:51:46.481758 kubelet[2655]: E0517 00:51:46.481720 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:51:47.482341 kubelet[2655]: E0517 00:51:47.482306 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:51:58.482266 kubelet[2655]: E0517 00:51:58.482218 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:52:01.483867 kubelet[2655]: E0517 00:52:01.483817 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:52:03.142135 systemd[1]: run-containerd-runc-k8s.io-75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d-runc.Mci4li.mount: Deactivated successfully. 
May 17 00:52:12.482158 kubelet[2655]: E0517 00:52:12.482037 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:52:14.021919 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.RkBu3A.mount: Deactivated successfully. May 17 00:52:15.481575 env[1585]: time="2025-05-17T00:52:15.481523290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:52:15.655058 env[1585]: time="2025-05-17T00:52:15.654865871Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:52:15.658450 env[1585]: time="2025-05-17T00:52:15.658391529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:52:15.658851 kubelet[2655]: E0517 00:52:15.658809 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:52:15.659153 kubelet[2655]: E0517 00:52:15.658871 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: 
failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:52:15.659283 kubelet[2655]: E0517 00:52:15.658988 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c3d4b8a8cf3948f2b6deeb355e35bdc2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch 
anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:52:15.661629 env[1585]: time="2025-05-17T00:52:15.661408877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:52:15.814988 env[1585]: time="2025-05-17T00:52:15.814928122Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:52:15.819914 env[1585]: time="2025-05-17T00:52:15.819864636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:52:15.820286 kubelet[2655]: E0517 00:52:15.820233 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:52:15.820364 kubelet[2655]: E0517 00:52:15.820304 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:52:15.820740 kubelet[2655]: E0517 00:52:15.820426 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:52:15.821999 kubelet[2655]: E0517 00:52:15.821969 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:52:19.287040 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.SKkrxp.mount: Deactivated successfully. 
May 17 00:52:27.481548 env[1585]: time="2025-05-17T00:52:27.481479623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:52:27.636110 env[1585]: time="2025-05-17T00:52:27.635884457Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:52:27.639692 env[1585]: time="2025-05-17T00:52:27.639616931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:52:27.639942 kubelet[2655]: E0517 00:52:27.639900 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:52:27.640289 kubelet[2655]: E0517 00:52:27.640270 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:52:27.640535 kubelet[2655]: E0517 00:52:27.640458 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbrwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7vz96_calico-system(91ca7308-a9d6-4eb9-b8c4-045158a71d72): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:52:27.642052 kubelet[2655]: E0517 00:52:27.642005 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:52:30.483374 kubelet[2655]: E0517 00:52:30.483330 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:52:33.142586 systemd[1]: run-containerd-runc-k8s.io-75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d-runc.YTyCLw.mount: Deactivated successfully. May 17 00:52:41.481666 kubelet[2655]: E0517 00:52:41.481617 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:52:44.017890 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.K8iW4H.mount: Deactivated successfully. May 17 00:52:45.481757 kubelet[2655]: E0517 00:52:45.481717 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:52:55.481742 kubelet[2655]: E0517 00:52:55.481695 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:52:58.482132 kubelet[2655]: E0517 00:52:58.482093 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:53:01.153715 systemd[1]: Started sshd@7-10.200.20.24:22-10.200.16.10:42818.service. May 17 00:53:01.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.24:22-10.200.16.10:42818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:01.176582 kernel: audit: type=1130 audit(1747443181.152:440): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.24:22-10.200.16.10:42818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:01.634983 sshd[5819]: Accepted publickey for core from 10.200.16.10 port 42818 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:01.633000 audit[5819]: USER_ACCT pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:01.663398 sshd[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:01.661000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:01.692446 kernel: audit: type=1101 audit(1747443181.633:441): pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:01.692589 kernel: audit: type=1103 audit(1747443181.661:442): pid=5819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:01.706801 kernel: audit: type=1006 audit(1747443181.661:443): pid=5819 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 17 00:53:01.661000 audit[5819]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4b85e40 a2=3 a3=1 items=0 ppid=1 pid=5819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:01.731558 kernel: audit: type=1300 audit(1747443181.661:443): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4b85e40 a2=3 a3=1 items=0 ppid=1 pid=5819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:01.661000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:01.740187 kernel: audit: type=1327 audit(1747443181.661:443): proctitle=737368643A20636F7265205B707269765D May 17 00:53:01.742941 systemd-logind[1574]: New session 10 of user core. May 17 00:53:01.743495 systemd[1]: Started session-10.scope. 
May 17 00:53:01.748000 audit[5819]: USER_START pid=5819 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:01.749000 audit[5822]: CRED_ACQ pid=5822 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:01.797002 kernel: audit: type=1105 audit(1747443181.748:444): pid=5819 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:01.797159 kernel: audit: type=1103 audit(1747443181.749:445): pid=5822 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:02.174759 sshd[5819]: pam_unix(sshd:session): session closed for user core May 17 00:53:02.174000 audit[5819]: USER_END pid=5819 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:02.174000 audit[5819]: CRED_DISP pid=5819 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:02.202710 
systemd[1]: sshd@7-10.200.20.24:22-10.200.16.10:42818.service: Deactivated successfully. May 17 00:53:02.203582 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:53:02.226378 kernel: audit: type=1106 audit(1747443182.174:446): pid=5819 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:02.226570 kernel: audit: type=1104 audit(1747443182.174:447): pid=5819 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:02.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.24:22-10.200.16.10:42818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:02.227693 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. May 17 00:53:02.228981 systemd-logind[1574]: Removed session 10. May 17 00:53:07.254988 systemd[1]: Started sshd@8-10.200.20.24:22-10.200.16.10:42830.service. May 17 00:53:07.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.24:22-10.200.16.10:42830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:07.262044 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:53:07.262177 kernel: audit: type=1130 audit(1747443187.256:449): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.24:22-10.200.16.10:42830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:07.481703 kubelet[2655]: E0517 00:53:07.481664 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:53:07.754111 sshd[5855]: Accepted publickey for core from 10.200.16.10 port 42830 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:07.753000 audit[5855]: USER_ACCT pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:07.782347 sshd[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:07.788749 systemd[1]: Started session-11.scope. May 17 00:53:07.789920 systemd-logind[1574]: New session 11 of user core. 
May 17 00:53:07.781000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:07.824679 kernel: audit: type=1101 audit(1747443187.753:450): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:07.824815 kernel: audit: type=1103 audit(1747443187.781:451): pid=5855 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:07.839842 kernel: audit: type=1006 audit(1747443187.781:452): pid=5855 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 17 00:53:07.781000 audit[5855]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb0fea30 a2=3 a3=1 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:07.864983 kernel: audit: type=1300 audit(1747443187.781:452): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeb0fea30 a2=3 a3=1 items=0 ppid=1 pid=5855 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:07.781000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:07.874117 kernel: audit: type=1327 audit(1747443187.781:452): proctitle=737368643A20636F7265205B707269765D May 17 00:53:07.800000 
audit[5855]: USER_START pid=5855 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:07.900367 kernel: audit: type=1105 audit(1747443187.800:453): pid=5855 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:07.802000 audit[5858]: CRED_ACQ pid=5858 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:07.923396 kernel: audit: type=1103 audit(1747443187.802:454): pid=5858 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:08.211853 sshd[5855]: pam_unix(sshd:session): session closed for user core May 17 00:53:08.212000 audit[5855]: USER_END pid=5855 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:08.226237 systemd[1]: sshd@8-10.200.20.24:22-10.200.16.10:42830.service: Deactivated successfully. May 17 00:53:08.227218 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:53:08.228910 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. 
May 17 00:53:08.229836 systemd-logind[1574]: Removed session 11. May 17 00:53:08.212000 audit[5855]: CRED_DISP pid=5855 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:08.264008 kernel: audit: type=1106 audit(1747443188.212:455): pid=5855 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:08.264162 kernel: audit: type=1104 audit(1747443188.212:456): pid=5855 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:08.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.24:22-10.200.16.10:42830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:10.484651 kubelet[2655]: E0517 00:53:10.484612 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:53:13.286916 systemd[1]: Started sshd@9-10.200.20.24:22-10.200.16.10:49150.service. 
May 17 00:53:13.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.24:22-10.200.16.10:49150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.292413 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:53:13.292525 kernel: audit: type=1130 audit(1747443193.287:458): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.24:22-10.200.16.10:49150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.765000 audit[5870]: USER_ACCT pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:13.766370 sshd[5870]: Accepted publickey for core from 10.200.16.10 port 49150 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:13.789000 audit[5870]: CRED_ACQ pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:13.791014 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:13.813316 kernel: audit: type=1101 audit(1747443193.765:459): pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:13.813520 kernel: audit: type=1103 audit(1747443193.789:460): pid=5870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:13.827460 kernel: audit: type=1006 audit(1747443193.790:461): pid=5870 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 May 17 00:53:13.790000 audit[5870]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5c468e0 a2=3 a3=1 items=0 ppid=1 pid=5870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:13.856451 kernel: audit: type=1300 audit(1747443193.790:461): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5c468e0 a2=3 a3=1 items=0 ppid=1 pid=5870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:13.790000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:13.865389 kernel: audit: type=1327 audit(1747443193.790:461): proctitle=737368643A20636F7265205B707269765D May 17 00:53:13.867513 systemd-logind[1574]: New session 12 of user core. May 17 00:53:13.867982 systemd[1]: Started session-12.scope. 
May 17 00:53:13.872000 audit[5870]: USER_START pid=5870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:13.872000 audit[5873]: CRED_ACQ pid=5873 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:13.921680 kernel: audit: type=1105 audit(1747443193.872:462): pid=5870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:13.921824 kernel: audit: type=1103 audit(1747443193.872:463): pid=5873 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:14.208104 sshd[5870]: pam_unix(sshd:session): session closed for user core May 17 00:53:14.208000 audit[5870]: USER_END pid=5870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:14.236211 systemd[1]: sshd@9-10.200.20.24:22-10.200.16.10:49150.service: Deactivated successfully. 
May 17 00:53:14.208000 audit[5870]: CRED_DISP pid=5870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:14.258442 kernel: audit: type=1106 audit(1747443194.208:464): pid=5870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:14.258565 kernel: audit: type=1104 audit(1747443194.208:465): pid=5870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:14.237031 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:53:14.258806 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. May 17 00:53:14.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.24:22-10.200.16.10:49150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:14.260062 systemd-logind[1574]: Removed session 12. May 17 00:53:14.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.24:22-10.200.16.10:49160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:14.288402 systemd[1]: Started sshd@10-10.200.20.24:22-10.200.16.10:49160.service. 
May 17 00:53:14.770000 audit[5902]: USER_ACCT pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:14.771621 sshd[5902]: Accepted publickey for core from 10.200.16.10 port 49160 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:14.772000 audit[5902]: CRED_ACQ pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:14.772000 audit[5902]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc16839d0 a2=3 a3=1 items=0 ppid=1 pid=5902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:14.772000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:14.773636 sshd[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:14.778277 systemd[1]: Started session-13.scope. May 17 00:53:14.779242 systemd-logind[1574]: New session 13 of user core. 
May 17 00:53:14.784000 audit[5902]: USER_START pid=5902 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:14.785000 audit[5905]: CRED_ACQ pid=5905 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:15.236670 sshd[5902]: pam_unix(sshd:session): session closed for user core May 17 00:53:15.237000 audit[5902]: USER_END pid=5902 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:15.237000 audit[5902]: CRED_DISP pid=5902 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:15.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.24:22-10.200.16.10:49160 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:15.239696 systemd[1]: sshd@10-10.200.20.24:22-10.200.16.10:49160.service: Deactivated successfully. May 17 00:53:15.241329 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:53:15.241876 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. May 17 00:53:15.243155 systemd-logind[1574]: Removed session 13. 
May 17 00:53:15.309123 systemd[1]: Started sshd@11-10.200.20.24:22-10.200.16.10:49170.service.
May 17 00:53:15.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.24:22-10.200.16.10:49170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:15.765000 audit[5913]: USER_ACCT pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:15.766829 sshd[5913]: Accepted publickey for core from 10.200.16.10 port 49170 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:15.766000 audit[5913]: CRED_ACQ pid=5913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:15.767000 audit[5913]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0dc4160 a2=3 a3=1 items=0 ppid=1 pid=5913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:15.767000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:53:15.769082 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:15.773877 systemd[1]: Started session-14.scope.
May 17 00:53:15.774436 systemd-logind[1574]: New session 14 of user core.
May 17 00:53:15.779000 audit[5913]: USER_START pid=5913 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:15.780000 audit[5916]: CRED_ACQ pid=5916 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:16.171585 sshd[5913]: pam_unix(sshd:session): session closed for user core
May 17 00:53:16.171000 audit[5913]: USER_END pid=5913 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:16.171000 audit[5913]: CRED_DISP pid=5913 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:16.175936 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit.
May 17 00:53:16.176044 systemd[1]: sshd@11-10.200.20.24:22-10.200.16.10:49170.service: Deactivated successfully.
May 17 00:53:16.176949 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:53:16.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.24:22-10.200.16.10:49170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:16.177382 systemd-logind[1574]: Removed session 14.
May 17 00:53:21.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.24:22-10.200.16.10:56952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:21.250685 systemd[1]: Started sshd@12-10.200.20.24:22-10.200.16.10:56952.service.
May 17 00:53:21.256095 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 17 00:53:21.256201 kernel: audit: type=1130 audit(1747443201.249:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.24:22-10.200.16.10:56952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:21.481921 kubelet[2655]: E0517 00:53:21.481872 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72"
May 17 00:53:21.738000 audit[5947]: USER_ACCT pid=5947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:21.740710 sshd[5947]: Accepted publickey for core from 10.200.16.10 port 56952 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:21.762000 audit[5947]: CRED_ACQ pid=5947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:21.764802 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:21.785361 kernel: audit: type=1101 audit(1747443201.738:486): pid=5947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:21.785536 kernel: audit: type=1103 audit(1747443201.762:487): pid=5947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:21.799038 kernel: audit: type=1006 audit(1747443201.762:488): pid=5947 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
May 17 00:53:21.762000 audit[5947]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4afc2e0 a2=3 a3=1 items=0 ppid=1 pid=5947 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:21.802477 systemd[1]: Started session-15.scope.
May 17 00:53:21.803279 systemd-logind[1574]: New session 15 of user core.
May 17 00:53:21.822404 kernel: audit: type=1300 audit(1747443201.762:488): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc4afc2e0 a2=3 a3=1 items=0 ppid=1 pid=5947 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:21.762000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:53:21.831067 kernel: audit: type=1327 audit(1747443201.762:488): proctitle=737368643A20636F7265205B707269765D
May 17 00:53:21.829000 audit[5947]: USER_START pid=5947 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:21.856000 audit[5950]: CRED_ACQ pid=5950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:21.878770 kernel: audit: type=1105 audit(1747443201.829:489): pid=5947 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:21.878890 kernel: audit: type=1103 audit(1747443201.856:490): pid=5950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:22.205310 sshd[5947]: pam_unix(sshd:session): session closed for user core
May 17 00:53:22.205000 audit[5947]: USER_END pid=5947 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:22.209215 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit.
May 17 00:53:22.210812 systemd[1]: sshd@12-10.200.20.24:22-10.200.16.10:56952.service: Deactivated successfully.
May 17 00:53:22.211648 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:53:22.213113 systemd-logind[1574]: Removed session 15.
May 17 00:53:22.206000 audit[5947]: CRED_DISP pid=5947 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:22.253885 kernel: audit: type=1106 audit(1747443202.205:491): pid=5947 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:22.254027 kernel: audit: type=1104 audit(1747443202.206:492): pid=5947 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:22.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.24:22-10.200.16.10:56952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:25.482403 kubelet[2655]: E0517 00:53:25.482328 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897"
May 17 00:53:27.284199 systemd[1]: Started sshd@13-10.200.20.24:22-10.200.16.10:56960.service.
May 17 00:53:27.311936 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:53:27.312082 kernel: audit: type=1130 audit(1747443207.284:494): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.24:22-10.200.16.10:56960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:27.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.24:22-10.200.16.10:56960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:27.762000 audit[5964]: USER_ACCT pid=5964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:27.764276 sshd[5964]: Accepted publickey for core from 10.200.16.10 port 56960 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:27.766113 sshd[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:27.764000 audit[5964]: CRED_ACQ pid=5964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:27.807817 kernel: audit: type=1101 audit(1747443207.762:495): pid=5964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:27.807938 kernel: audit: type=1103 audit(1747443207.764:496): pid=5964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:27.822003 kernel: audit: type=1006 audit(1747443207.764:497): pid=5964 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
May 17 00:53:27.764000 audit[5964]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc86f9e20 a2=3 a3=1 items=0 ppid=1 pid=5964 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:27.845778 kernel: audit: type=1300 audit(1747443207.764:497): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc86f9e20 a2=3 a3=1 items=0 ppid=1 pid=5964 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:27.764000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:53:27.847300 systemd-logind[1574]: New session 16 of user core.
May 17 00:53:27.847614 systemd[1]: Started session-16.scope.
May 17 00:53:27.857507 kernel: audit: type=1327 audit(1747443207.764:497): proctitle=737368643A20636F7265205B707269765D
May 17 00:53:27.858000 audit[5964]: USER_START pid=5964 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:27.860000 audit[5967]: CRED_ACQ pid=5967 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:27.905890 kernel: audit: type=1105 audit(1747443207.858:498): pid=5964 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:27.906044 kernel: audit: type=1103 audit(1747443207.860:499): pid=5967 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:28.214820 sshd[5964]: pam_unix(sshd:session): session closed for user core
May 17 00:53:28.214000 audit[5964]: USER_END pid=5964 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:28.217832 systemd[1]: sshd@13-10.200.20.24:22-10.200.16.10:56960.service: Deactivated successfully.
May 17 00:53:28.218693 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:53:28.214000 audit[5964]: CRED_DISP pid=5964 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:28.264415 kernel: audit: type=1106 audit(1747443208.214:500): pid=5964 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:28.264564 kernel: audit: type=1104 audit(1747443208.214:501): pid=5964 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:28.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.24:22-10.200.16.10:56960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:28.264818 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit.
May 17 00:53:28.265823 systemd-logind[1574]: Removed session 16.
May 17 00:53:33.288759 systemd[1]: Started sshd@14-10.200.20.24:22-10.200.16.10:59026.service.
May 17 00:53:33.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.24:22-10.200.16.10:59026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.294743 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:53:33.294858 kernel: audit: type=1130 audit(1747443213.287:503): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.24:22-10.200.16.10:59026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.767276 sshd[5999]: Accepted publickey for core from 10.200.16.10 port 59026 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:33.765000 audit[5999]: USER_ACCT pid=5999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:33.791798 sshd[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:33.790000 audit[5999]: CRED_ACQ pid=5999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:33.816576 kernel: audit: type=1101 audit(1747443213.765:504): pid=5999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:33.816710 kernel: audit: type=1103 audit(1747443213.790:505): pid=5999 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:33.831146 kernel: audit: type=1006 audit(1747443213.790:506): pid=5999 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
May 17 00:53:33.790000 audit[5999]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe110f6e0 a2=3 a3=1 items=0 ppid=1 pid=5999 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:33.855590 kernel: audit: type=1300 audit(1747443213.790:506): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe110f6e0 a2=3 a3=1 items=0 ppid=1 pid=5999 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:33.790000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:53:33.859657 systemd[1]: Started session-17.scope.
May 17 00:53:33.860581 systemd-logind[1574]: New session 17 of user core.
May 17 00:53:33.864573 kernel: audit: type=1327 audit(1747443213.790:506): proctitle=737368643A20636F7265205B707269765D
May 17 00:53:33.865000 audit[5999]: USER_START pid=5999 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:33.866000 audit[6008]: CRED_ACQ pid=6008 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:33.915697 kernel: audit: type=1105 audit(1747443213.865:507): pid=5999 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:33.915825 kernel: audit: type=1103 audit(1747443213.866:508): pid=6008 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:34.209705 sshd[5999]: pam_unix(sshd:session): session closed for user core
May 17 00:53:34.209000 audit[5999]: USER_END pid=5999 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:34.212572 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit.
May 17 00:53:34.213842 systemd[1]: sshd@14-10.200.20.24:22-10.200.16.10:59026.service: Deactivated successfully.
May 17 00:53:34.214714 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:53:34.215983 systemd-logind[1574]: Removed session 17.
May 17 00:53:34.209000 audit[5999]: CRED_DISP pid=5999 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:34.258580 kernel: audit: type=1106 audit(1747443214.209:509): pid=5999 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:34.258711 kernel: audit: type=1104 audit(1747443214.209:510): pid=5999 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:34.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.24:22-10.200.16.10:59026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:35.481652 kubelet[2655]: E0517 00:53:35.481319 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72"
May 17 00:53:38.482137 env[1585]: time="2025-05-17T00:53:38.481841897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\""
May 17 00:53:38.671520 env[1585]: time="2025-05-17T00:53:38.671313428Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 17 00:53:38.675197 env[1585]: time="2025-05-17T00:53:38.675081987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 17 00:53:38.675372 kubelet[2655]: E0517 00:53:38.675325 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:53:38.675701 kubelet[2655]: E0517 00:53:38.675378 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 17 00:53:38.675701 kubelet[2655]: E0517 00:53:38.675470 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c3d4b8a8cf3948f2b6deeb355e35bdc2,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 17 00:53:38.677871 env[1585]: time="2025-05-17T00:53:38.677691321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 17 00:53:38.878967 env[1585]: time="2025-05-17T00:53:38.878902698Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io
May 17 00:53:38.882942 env[1585]: time="2025-05-17T00:53:38.882883142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden"
May 17 00:53:38.883244 kubelet[2655]: E0517 00:53:38.883207 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:53:38.883429 kubelet[2655]: E0517 00:53:38.883411 2655 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 17 00:53:38.883665 kubelet[2655]: E0517 00:53:38.883622 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2v85t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f88f6fd44-x7zmt_calico-system(90b80adb-a2bc-44f9-b9c7-b7b81f9e9897): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError"
May 17 00:53:38.885181 kubelet[2655]: E0517 00:53:38.885134 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897"
May 17 00:53:39.291537 systemd[1]: Started sshd@15-10.200.20.24:22-10.200.16.10:43032.service.
May 17 00:53:39.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.24:22-10.200.16.10:43032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:39.297076 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:53:39.297212 kernel: audit: type=1130 audit(1747443219.291:512): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.24:22-10.200.16.10:43032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:39.786000 audit[6021]: USER_ACCT pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:39.788445 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:39.789716 sshd[6021]: Accepted publickey for core from 10.200.16.10 port 43032 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:39.787000 audit[6021]: CRED_ACQ pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:39.832766 kernel: audit: type=1101 audit(1747443219.786:513): pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:39.832884 kernel: audit: type=1103 audit(1747443219.787:514): pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:53:39.847458 kernel: audit: type=1006 audit(1747443219.787:515): pid=6021 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1
May 17 00:53:39.850042 systemd[1]: Started session-18.scope.
May 17 00:53:39.851421 systemd-logind[1574]: New session 18 of user core.
May 17 00:53:39.787000 audit[6021]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedeab080 a2=3 a3=1 items=0 ppid=1 pid=6021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:39.876162 kernel: audit: type=1300 audit(1747443219.787:515): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedeab080 a2=3 a3=1 items=0 ppid=1 pid=6021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:39.787000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:39.890624 kernel: audit: type=1327 audit(1747443219.787:515): proctitle=737368643A20636F7265205B707269765D May 17 00:53:39.890729 kernel: audit: type=1105 audit(1747443219.880:516): pid=6021 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:39.880000 audit[6021]: USER_START pid=6021 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:39.881000 audit[6024]: CRED_ACQ pid=6024 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:39.937394 kernel: audit: type=1103 audit(1747443219.881:517): pid=6024 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:40.341745 sshd[6021]: pam_unix(sshd:session): session closed for user core May 17 00:53:40.342000 audit[6021]: USER_END pid=6021 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:40.345222 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. May 17 00:53:40.346000 systemd[1]: sshd@15-10.200.20.24:22-10.200.16.10:43032.service: Deactivated successfully. May 17 00:53:40.346859 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:53:40.348096 systemd-logind[1574]: Removed session 18. May 17 00:53:40.342000 audit[6021]: CRED_DISP pid=6021 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:40.388693 kernel: audit: type=1106 audit(1747443220.342:518): pid=6021 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:40.388854 kernel: audit: type=1104 audit(1747443220.342:519): pid=6021 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:40.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@15-10.200.20.24:22-10.200.16.10:43032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:40.421444 systemd[1]: Started sshd@16-10.200.20.24:22-10.200.16.10:43036.service. May 17 00:53:40.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.24:22-10.200.16.10:43036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:40.905272 sshd[6033]: Accepted publickey for core from 10.200.16.10 port 43036 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:40.904000 audit[6033]: USER_ACCT pid=6033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:40.906897 sshd[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:40.906000 audit[6033]: CRED_ACQ pid=6033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:40.906000 audit[6033]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff9cc8620 a2=3 a3=1 items=0 ppid=1 pid=6033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:40.906000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:40.911614 systemd[1]: Started session-19.scope. May 17 00:53:40.911818 systemd-logind[1574]: New session 19 of user core. 
May 17 00:53:40.916000 audit[6033]: USER_START pid=6033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:40.917000 audit[6036]: CRED_ACQ pid=6036 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:41.444035 sshd[6033]: pam_unix(sshd:session): session closed for user core May 17 00:53:41.444000 audit[6033]: USER_END pid=6033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:41.445000 audit[6033]: CRED_DISP pid=6033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:41.447290 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit. May 17 00:53:41.448230 systemd[1]: sshd@16-10.200.20.24:22-10.200.16.10:43036.service: Deactivated successfully. May 17 00:53:41.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.24:22-10.200.16.10:43036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:41.449131 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:53:41.449606 systemd-logind[1574]: Removed session 19. 
May 17 00:53:41.517361 systemd[1]: Started sshd@17-10.200.20.24:22-10.200.16.10:43046.service. May 17 00:53:41.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.24:22-10.200.16.10:43046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:41.969000 audit[6044]: USER_ACCT pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:41.970532 sshd[6044]: Accepted publickey for core from 10.200.16.10 port 43046 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:41.971000 audit[6044]: CRED_ACQ pid=6044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:41.971000 audit[6044]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7fa9100 a2=3 a3=1 items=0 ppid=1 pid=6044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:41.971000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:41.973292 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:41.977943 systemd[1]: Started session-20.scope. May 17 00:53:41.978274 systemd-logind[1574]: New session 20 of user core. 
May 17 00:53:41.982000 audit[6044]: USER_START pid=6044 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:41.983000 audit[6047]: CRED_ACQ pid=6047 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.025017 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.OSXV7F.mount: Deactivated successfully. May 17 00:53:44.168000 audit[6078]: NETFILTER_CFG table=filter:139 family=2 entries=24 op=nft_register_rule pid=6078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:53:44.168000 audit[6078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=ffffde0d58f0 a2=0 a3=1 items=0 ppid=2769 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:44.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:53:44.176000 audit[6078]: NETFILTER_CFG table=nat:140 family=2 entries=22 op=nft_register_rule pid=6078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:53:44.176000 audit[6078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffde0d58f0 a2=0 a3=1 items=0 ppid=2769 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
00:53:44.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:53:44.191000 audit[6080]: NETFILTER_CFG table=filter:141 family=2 entries=36 op=nft_register_rule pid=6080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:53:44.191000 audit[6080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=fffffc8a2a30 a2=0 a3=1 items=0 ppid=2769 pid=6080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:44.191000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:53:44.199000 audit[6080]: NETFILTER_CFG table=nat:142 family=2 entries=22 op=nft_register_rule pid=6080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:53:44.199000 audit[6080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffffc8a2a30 a2=0 a3=1 items=0 ppid=2769 pid=6080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:44.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:53:44.259725 sshd[6044]: pam_unix(sshd:session): session closed for user core May 17 00:53:44.260000 audit[6044]: USER_END pid=6044 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.260000 audit[6044]: CRED_DISP pid=6044 uid=0 
auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.262867 systemd[1]: sshd@17-10.200.20.24:22-10.200.16.10:43046.service: Deactivated successfully. May 17 00:53:44.263719 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:53:44.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.24:22-10.200.16.10:43046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:44.264256 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit. May 17 00:53:44.265133 systemd-logind[1574]: Removed session 20. May 17 00:53:44.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.24:22-10.200.16.10:43058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:44.338546 systemd[1]: Started sshd@18-10.200.20.24:22-10.200.16.10:43058.service. May 17 00:53:44.344504 kernel: kauditd_printk_skb: 35 callbacks suppressed May 17 00:53:44.344605 kernel: audit: type=1130 audit(1747443224.338:543): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.24:22-10.200.16.10:43058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:44.819000 audit[6083]: USER_ACCT pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.820738 sshd[6083]: Accepted publickey for core from 10.200.16.10 port 43058 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:44.825168 sshd[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:44.824000 audit[6083]: CRED_ACQ pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.864719 kernel: audit: type=1101 audit(1747443224.819:544): pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.864841 kernel: audit: type=1103 audit(1747443224.824:545): pid=6083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.867567 systemd[1]: Started session-21.scope. May 17 00:53:44.880501 systemd-logind[1574]: New session 21 of user core. 
May 17 00:53:44.824000 audit[6083]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd99db7f0 a2=3 a3=1 items=0 ppid=1 pid=6083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:44.905609 kernel: audit: type=1006 audit(1747443224.824:546): pid=6083 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 May 17 00:53:44.905727 kernel: audit: type=1300 audit(1747443224.824:546): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd99db7f0 a2=3 a3=1 items=0 ppid=1 pid=6083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:44.824000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:44.913672 kernel: audit: type=1327 audit(1747443224.824:546): proctitle=737368643A20636F7265205B707269765D May 17 00:53:44.917000 audit[6083]: USER_START pid=6083 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.919000 audit[6086]: CRED_ACQ pid=6086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:44.966309 kernel: audit: type=1105 audit(1747443224.917:547): pid=6083 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 
terminal=ssh res=success' May 17 00:53:44.966419 kernel: audit: type=1103 audit(1747443224.919:548): pid=6086 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:45.385627 sshd[6083]: pam_unix(sshd:session): session closed for user core May 17 00:53:45.386000 audit[6083]: USER_END pid=6083 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:45.388761 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit. May 17 00:53:45.389696 systemd[1]: sshd@18-10.200.20.24:22-10.200.16.10:43058.service: Deactivated successfully. May 17 00:53:45.390514 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:53:45.391905 systemd-logind[1574]: Removed session 21. 
May 17 00:53:45.386000 audit[6083]: CRED_DISP pid=6083 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:45.432509 kernel: audit: type=1106 audit(1747443225.386:549): pid=6083 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:45.432655 kernel: audit: type=1104 audit(1747443225.386:550): pid=6083 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:45.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.24:22-10.200.16.10:43058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:45.464128 systemd[1]: Started sshd@19-10.200.20.24:22-10.200.16.10:43062.service. May 17 00:53:45.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.24:22-10.200.16.10:43062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:45.945000 audit[6093]: USER_ACCT pid=6093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:45.946368 sshd[6093]: Accepted publickey for core from 10.200.16.10 port 43062 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:45.947000 audit[6093]: CRED_ACQ pid=6093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:45.947000 audit[6093]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9dad6a0 a2=3 a3=1 items=0 ppid=1 pid=6093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:45.947000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:45.947961 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:45.952315 systemd[1]: Started session-22.scope. May 17 00:53:45.953241 systemd-logind[1574]: New session 22 of user core. 
May 17 00:53:45.958000 audit[6093]: USER_START pid=6093 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:45.959000 audit[6096]: CRED_ACQ pid=6096 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:46.369373 sshd[6093]: pam_unix(sshd:session): session closed for user core May 17 00:53:46.369000 audit[6093]: USER_END pid=6093 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:46.369000 audit[6093]: CRED_DISP pid=6093 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:46.372479 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit. May 17 00:53:46.372820 systemd[1]: sshd@19-10.200.20.24:22-10.200.16.10:43062.service: Deactivated successfully. May 17 00:53:46.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.24:22-10.200.16.10:43062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:46.373675 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:53:46.374996 systemd-logind[1574]: Removed session 22. 
May 17 00:53:46.482882 kubelet[2655]: E0517 00:53:46.482837 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:53:49.481823 kubelet[2655]: E0517 00:53:49.481787 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:53:51.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.24:22-10.200.16.10:39188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:51.451433 systemd[1]: Started sshd@20-10.200.20.24:22-10.200.16.10:39188.service. May 17 00:53:51.456606 kernel: kauditd_printk_skb: 12 callbacks suppressed May 17 00:53:51.456681 kernel: audit: type=1130 audit(1747443231.451:561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.24:22-10.200.16.10:39188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:51.662000 audit[6110]: NETFILTER_CFG table=filter:143 family=2 entries=24 op=nft_register_rule pid=6110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:53:51.662000 audit[6110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffda5626a0 a2=0 a3=1 items=0 ppid=2769 pid=6110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:51.702807 kernel: audit: type=1325 audit(1747443231.662:562): table=filter:143 family=2 entries=24 op=nft_register_rule pid=6110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:53:51.702960 kernel: audit: type=1300 audit(1747443231.662:562): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffda5626a0 a2=0 a3=1 items=0 ppid=2769 pid=6110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:51.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:53:51.716197 kernel: audit: type=1327 audit(1747443231.662:562): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:53:51.707000 audit[6110]: NETFILTER_CFG table=nat:144 family=2 entries=106 op=nft_register_chain pid=6110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:53:51.729669 kernel: audit: type=1325 audit(1747443231.707:563): table=nat:144 family=2 entries=106 op=nft_register_chain pid=6110 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 00:53:51.707000 audit[6110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffda5626a0 a2=0 a3=1 items=0 ppid=2769 pid=6110 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:51.757032 kernel: audit: type=1300 audit(1747443231.707:563): arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffda5626a0 a2=0 a3=1 items=0 ppid=2769 pid=6110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:51.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:53:51.770107 kernel: audit: type=1327 audit(1747443231.707:563): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 00:53:51.916000 audit[6107]: USER_ACCT pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:51.916890 sshd[6107]: Accepted publickey for core from 10.200.16.10 port 39188 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:51.918713 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:51.917000 audit[6107]: CRED_ACQ pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:51.961189 kernel: audit: type=1101 audit(1747443231.916:564): pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:51.961320 kernel: audit: type=1103 audit(1747443231.917:565): pid=6107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:51.975197 kernel: audit: type=1006 audit(1747443231.917:566): pid=6107 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 May 17 00:53:51.917000 audit[6107]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc3919c10 a2=3 a3=1 items=0 ppid=1 pid=6107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:51.917000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:51.977821 systemd-logind[1574]: New session 23 of user core. May 17 00:53:51.979294 systemd[1]: Started session-23.scope. 
May 17 00:53:51.983000 audit[6107]: USER_START pid=6107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:51.984000 audit[6113]: CRED_ACQ pid=6113 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:52.311163 sshd[6107]: pam_unix(sshd:session): session closed for user core May 17 00:53:52.311000 audit[6107]: USER_END pid=6107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:52.311000 audit[6107]: CRED_DISP pid=6107 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:52.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.24:22-10.200.16.10:39188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:52.314009 systemd[1]: sshd@20-10.200.20.24:22-10.200.16.10:39188.service: Deactivated successfully. May 17 00:53:52.314848 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:53:52.315154 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit. May 17 00:53:52.316097 systemd-logind[1574]: Removed session 23. 
May 17 00:53:57.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.24:22-10.200.16.10:39200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:57.391024 systemd[1]: Started sshd@21-10.200.20.24:22-10.200.16.10:39200.service. May 17 00:53:57.396195 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:53:57.396280 kernel: audit: type=1130 audit(1747443237.390:572): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.24:22-10.200.16.10:39200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:57.483393 env[1585]: time="2025-05-17T00:53:57.483048811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:53:57.631789 env[1585]: time="2025-05-17T00:53:57.631673704Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 00:53:57.636440 env[1585]: time="2025-05-17T00:53:57.636394892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 00:53:57.636687 kubelet[2655]: E0517 00:53:57.636642 2655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:53:57.636986 kubelet[2655]: E0517 00:53:57.636707 2655 
kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:53:57.636986 kubelet[2655]: E0517 00:53:57.636841 2655 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbrwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-7vz96_calico-system(91ca7308-a9d6-4eb9-b8c4-045158a71d72): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 00:53:57.638301 kubelet[2655]: E0517 00:53:57.638272 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 
00:53:57.877000 audit[6123]: USER_ACCT pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:57.878337 sshd[6123]: Accepted publickey for core from 10.200.16.10 port 39200 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:57.880209 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:57.879000 audit[6123]: CRED_ACQ pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:57.924125 kernel: audit: type=1101 audit(1747443237.877:573): pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:57.924216 kernel: audit: type=1103 audit(1747443237.879:574): pid=6123 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:57.929201 systemd-logind[1574]: New session 24 of user core. May 17 00:53:57.929949 systemd[1]: Started session-24.scope. 
May 17 00:53:57.947587 kernel: audit: type=1006 audit(1747443237.879:575): pid=6123 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 May 17 00:53:57.879000 audit[6123]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe246a430 a2=3 a3=1 items=0 ppid=1 pid=6123 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:57.973609 kernel: audit: type=1300 audit(1747443237.879:575): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe246a430 a2=3 a3=1 items=0 ppid=1 pid=6123 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:57.879000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:53:57.982364 kernel: audit: type=1327 audit(1747443237.879:575): proctitle=737368643A20636F7265205B707269765D May 17 00:53:57.949000 audit[6123]: USER_START pid=6123 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:58.007534 kernel: audit: type=1105 audit(1747443237.949:576): pid=6123 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:58.007643 kernel: audit: type=1103 audit(1747443237.973:577): pid=6127 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:57.973000 audit[6127]: CRED_ACQ pid=6127 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:58.315837 sshd[6123]: pam_unix(sshd:session): session closed for user core May 17 00:53:58.316000 audit[6123]: USER_END pid=6123 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:58.343127 systemd[1]: sshd@21-10.200.20.24:22-10.200.16.10:39200.service: Deactivated successfully. May 17 00:53:58.344029 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:53:58.316000 audit[6123]: CRED_DISP pid=6123 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:58.345535 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit. 
May 17 00:53:58.366174 kernel: audit: type=1106 audit(1747443238.316:578): pid=6123 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:58.366284 kernel: audit: type=1104 audit(1747443238.316:579): pid=6123 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:53:58.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.24:22-10.200.16.10:39200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:58.367457 systemd-logind[1574]: Removed session 24. May 17 00:54:03.149182 systemd[1]: run-containerd-runc-k8s.io-75e593a713a9fd929dc68141be34b151fc613e4cdd67d5336c9238b80d8b1a8d-runc.r0ckSC.mount: Deactivated successfully. May 17 00:54:03.395275 systemd[1]: Started sshd@22-10.200.20.24:22-10.200.16.10:55910.service. May 17 00:54:03.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.24:22-10.200.16.10:55910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:54:03.402287 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:54:03.402401 kernel: audit: type=1130 audit(1747443243.396:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.24:22-10.200.16.10:55910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:54:03.910205 sshd[6158]: Accepted publickey for core from 10.200.16.10 port 55910 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:54:03.908000 audit[6158]: USER_ACCT pid=6158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:03.933948 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:54:03.932000 audit[6158]: CRED_ACQ pid=6158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:03.941914 systemd[1]: Started session-25.scope. May 17 00:54:03.943093 systemd-logind[1574]: New session 25 of user core. May 17 00:54:03.963549 kernel: audit: type=1101 audit(1747443243.908:582): pid=6158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:03.963677 kernel: audit: type=1103 audit(1747443243.932:583): pid=6158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:04.001941 kernel: audit: type=1006 audit(1747443243.932:584): pid=6158 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 May 17 00:54:03.932000 audit[6158]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdda52d80 a2=3 a3=1 items=0 ppid=1 pid=6158 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:54:04.038847 kernel: audit: type=1300 audit(1747443243.932:584): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdda52d80 a2=3 a3=1 items=0 ppid=1 pid=6158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:54:03.932000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:54:04.072735 kernel: audit: type=1327 audit(1747443243.932:584): proctitle=737368643A20636F7265205B707269765D May 17 00:54:03.963000 audit[6158]: USER_START pid=6158 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:04.107723 kernel: audit: type=1105 audit(1747443243.963:585): pid=6158 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:03.965000 audit[6175]: CRED_ACQ pid=6175 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:04.130390 kernel: audit: type=1103 audit(1747443243.965:586): pid=6175 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:04.365625 sshd[6158]: 
pam_unix(sshd:session): session closed for user core May 17 00:54:04.365000 audit[6158]: USER_END pid=6158 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:04.395618 systemd[1]: sshd@22-10.200.20.24:22-10.200.16.10:55910.service: Deactivated successfully. May 17 00:54:04.396415 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:54:04.365000 audit[6158]: CRED_DISP pid=6158 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:04.397100 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit. May 17 00:54:04.418936 kernel: audit: type=1106 audit(1747443244.365:587): pid=6158 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:04.419062 kernel: audit: type=1104 audit(1747443244.365:588): pid=6158 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:04.419873 systemd-logind[1574]: Removed session 25. May 17 00:54:04.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.24:22-10.200.16.10:55910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:54:04.482993 kubelet[2655]: E0517 00:54:04.482945 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897" May 17 00:54:09.447914 systemd[1]: Started sshd@23-10.200.20.24:22-10.200.16.10:56104.service. May 17 00:54:09.478205 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:54:09.478279 kernel: audit: type=1130 audit(1747443249.447:590): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.24:22-10.200.16.10:56104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:54:09.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.24:22-10.200.16.10:56104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:54:09.964097 sshd[6193]: Accepted publickey for core from 10.200.16.10 port 56104 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:54:09.962000 audit[6193]: USER_ACCT pid=6193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:09.989735 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:54:09.996714 systemd[1]: Started session-26.scope. May 17 00:54:09.996920 systemd-logind[1574]: New session 26 of user core. 
May 17 00:54:09.987000 audit[6193]: CRED_ACQ pid=6193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.039012 kernel: audit: type=1101 audit(1747443249.962:591): pid=6193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.039152 kernel: audit: type=1103 audit(1747443249.987:592): pid=6193 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.074417 kernel: audit: type=1006 audit(1747443249.987:593): pid=6193 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 May 17 00:54:09.987000 audit[6193]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0705ab0 a2=3 a3=1 items=0 ppid=1 pid=6193 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:54:10.098160 kernel: audit: type=1300 audit(1747443249.987:593): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff0705ab0 a2=3 a3=1 items=0 ppid=1 pid=6193 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:54:09.987000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:54:10.113096 kernel: audit: type=1327 audit(1747443249.987:593): proctitle=737368643A20636F7265205B707269765D May 17 00:54:10.010000 
audit[6193]: USER_START pid=6193 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.011000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.165529 kernel: audit: type=1105 audit(1747443250.010:594): pid=6193 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.165665 kernel: audit: type=1103 audit(1747443250.011:595): pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.388561 sshd[6193]: pam_unix(sshd:session): session closed for user core May 17 00:54:10.387000 audit[6193]: USER_END pid=6193 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.394171 systemd-logind[1574]: Session 26 logged out. Waiting for processes to exit. May 17 00:54:10.408463 systemd[1]: sshd@23-10.200.20.24:22-10.200.16.10:56104.service: Deactivated successfully. May 17 00:54:10.409407 systemd[1]: session-26.scope: Deactivated successfully. 
May 17 00:54:10.410769 systemd-logind[1574]: Removed session 26. May 17 00:54:10.388000 audit[6193]: CRED_DISP pid=6193 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.437568 kernel: audit: type=1106 audit(1747443250.387:596): pid=6193 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.437706 kernel: audit: type=1104 audit(1747443250.388:597): pid=6193 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:10.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.24:22-10.200.16.10:56104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:54:11.481631 kubelet[2655]: E0517 00:54:11.481585 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72" May 17 00:54:15.462423 systemd[1]: Started sshd@24-10.200.20.24:22-10.200.16.10:56116.service. 
May 17 00:54:15.490252 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 00:54:15.490386 kernel: audit: type=1130 audit(1747443255.461:599): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.24:22-10.200.16.10:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:54:15.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.24:22-10.200.16.10:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:54:15.912000 audit[6230]: USER_ACCT pid=6230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:15.914280 sshd[6230]: Accepted publickey for core from 10.200.16.10 port 56116 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:54:15.916235 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:54:15.914000 audit[6230]: CRED_ACQ pid=6230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:15.960107 kernel: audit: type=1101 audit(1747443255.912:600): pid=6230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:15.960209 kernel: audit: type=1103 audit(1747443255.914:601): pid=6230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' May 17 00:54:15.974714 kernel: audit: type=1006 audit(1747443255.914:602): pid=6230 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 May 17 00:54:15.914000 audit[6230]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc849250 a2=3 a3=1 items=0 ppid=1 pid=6230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:54:15.998845 kernel: audit: type=1300 audit(1747443255.914:602): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc849250 a2=3 a3=1 items=0 ppid=1 pid=6230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:54:15.914000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 00:54:16.006976 kernel: audit: type=1327 audit(1747443255.914:602): proctitle=737368643A20636F7265205B707269765D May 17 00:54:16.010025 systemd-logind[1574]: New session 27 of user core. May 17 00:54:16.010451 systemd[1]: Started session-27.scope. 
May 17 00:54:16.014000 audit[6230]: USER_START pid=6230 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:16.043260 kernel: audit: type=1105 audit(1747443256.014:603): pid=6230 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:16.043342 kernel: audit: type=1103 audit(1747443256.040:604): pid=6233 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:16.040000 audit[6233]: CRED_ACQ pid=6233 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:16.385704 sshd[6230]: pam_unix(sshd:session): session closed for user core
May 17 00:54:16.385000 audit[6230]: USER_END pid=6230 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:16.389288 systemd[1]: sshd@24-10.200.20.24:22-10.200.16.10:56116.service: Deactivated successfully.
May 17 00:54:16.390188 systemd[1]: session-27.scope: Deactivated successfully.
May 17 00:54:16.385000 audit[6230]: CRED_DISP pid=6230 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:16.433872 kernel: audit: type=1106 audit(1747443256.385:605): pid=6230 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:16.433972 kernel: audit: type=1104 audit(1747443256.385:606): pid=6230 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:16.413333 systemd-logind[1574]: Session 27 logged out. Waiting for processes to exit.
May 17 00:54:16.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.24:22-10.200.16.10:56116 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:54:16.434887 systemd-logind[1574]: Removed session 27.
May 17 00:54:18.483171 kubelet[2655]: E0517 00:54:18.483135 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-5f88f6fd44-x7zmt" podUID="90b80adb-a2bc-44f9-b9c7-b7b81f9e9897"
May 17 00:54:19.284900 systemd[1]: run-containerd-runc-k8s.io-2981edc5740b9aacec79d62fb50dd41beb913adb5a745378c007fbc1e3e9f3db-runc.opWOeB.mount: Deactivated successfully.
May 17 00:54:21.472668 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 00:54:21.472845 kernel: audit: type=1130 audit(1747443261.458:608): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.24:22-10.200.16.10:50862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:54:21.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.24:22-10.200.16.10:50862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:54:21.459501 systemd[1]: Started sshd@25-10.200.20.24:22-10.200.16.10:50862.service.
May 17 00:54:21.910000 audit[6265]: USER_ACCT pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:21.912817 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 50862 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:54:21.936559 kernel: audit: type=1101 audit(1747443261.910:609): pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:21.935000 audit[6265]: CRED_ACQ pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:21.937517 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:54:21.973088 kernel: audit: type=1103 audit(1747443261.935:610): pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:21.973202 kernel: audit: type=1006 audit(1747443261.935:611): pid=6265 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
May 17 00:54:21.935000 audit[6265]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffefd07100 a2=3 a3=1 items=0 ppid=1 pid=6265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:54:21.997398 kernel: audit: type=1300 audit(1747443261.935:611): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffefd07100 a2=3 a3=1 items=0 ppid=1 pid=6265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:54:21.935000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 00:54:22.005386 kernel: audit: type=1327 audit(1747443261.935:611): proctitle=737368643A20636F7265205B707269765D
May 17 00:54:22.008776 systemd-logind[1574]: New session 28 of user core.
May 17 00:54:22.009236 systemd[1]: Started session-28.scope.
May 17 00:54:22.012000 audit[6265]: USER_START pid=6265 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:22.012000 audit[6268]: CRED_ACQ pid=6268 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:22.061008 kernel: audit: type=1105 audit(1747443262.012:612): pid=6265 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:22.061107 kernel: audit: type=1103 audit(1747443262.012:613): pid=6268 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:22.344460 sshd[6265]: pam_unix(sshd:session): session closed for user core
May 17 00:54:22.343000 audit[6265]: USER_END pid=6265 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:22.348112 systemd[1]: sshd@25-10.200.20.24:22-10.200.16.10:50862.service: Deactivated successfully.
May 17 00:54:22.348964 systemd[1]: session-28.scope: Deactivated successfully.
May 17 00:54:22.345000 audit[6265]: CRED_DISP pid=6265 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:22.371693 systemd-logind[1574]: Session 28 logged out. Waiting for processes to exit.
May 17 00:54:22.372699 systemd-logind[1574]: Removed session 28.
May 17 00:54:22.392456 kernel: audit: type=1106 audit(1747443262.343:614): pid=6265 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:22.392998 kernel: audit: type=1104 audit(1747443262.345:615): pid=6265 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
May 17 00:54:22.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.24:22-10.200.16.10:50862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:54:22.482398 kubelet[2655]: E0517 00:54:22.482359 2655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-7vz96" podUID="91ca7308-a9d6-4eb9-b8c4-045158a71d72"