May 17 00:51:22.007709 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 17 00:51:22.007726 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri May 16 23:24:21 -00 2025 May 17 00:51:22.007734 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 17 00:51:22.007741 kernel: printk: bootconsole [pl11] enabled May 17 00:51:22.007746 kernel: efi: EFI v2.70 by EDK II May 17 00:51:22.007752 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98 May 17 00:51:22.007758 kernel: random: crng init done May 17 00:51:22.007763 kernel: ACPI: Early table checksum verification disabled May 17 00:51:22.007769 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 17 00:51:22.007774 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007779 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007785 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 17 00:51:22.007791 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007797 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007803 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007809 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007815 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007822 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007827 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 17 00:51:22.007833 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:51:22.007839 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 17 00:51:22.007844 kernel: NUMA: Failed to initialise from firmware May 17 00:51:22.007850 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] May 17 00:51:22.007856 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff] May 17 00:51:22.007862 kernel: Zone ranges: May 17 00:51:22.007867 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 17 00:51:22.007873 kernel: DMA32 empty May 17 00:51:22.007878 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 17 00:51:22.007885 kernel: Movable zone start for each node May 17 00:51:22.007891 kernel: Early memory node ranges May 17 00:51:22.007896 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 17 00:51:22.007902 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] May 17 00:51:22.007908 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 17 00:51:22.007913 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 17 00:51:22.007919 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 17 00:51:22.007924 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 17 00:51:22.007930 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 17 00:51:22.007935 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 17 
00:51:22.007941 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 17 00:51:22.007947 kernel: psci: probing for conduit method from ACPI. May 17 00:51:22.007956 kernel: psci: PSCIv1.1 detected in firmware. May 17 00:51:22.007962 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:51:22.007968 kernel: psci: MIGRATE_INFO_TYPE not supported. May 17 00:51:22.007974 kernel: psci: SMC Calling Convention v1.4 May 17 00:51:22.007980 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 May 17 00:51:22.007987 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 May 17 00:51:22.007993 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 17 00:51:22.007999 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 17 00:51:22.008005 kernel: pcpu-alloc: [0] 0 [0] 1 May 17 00:51:22.008011 kernel: Detected PIPT I-cache on CPU0 May 17 00:51:22.008017 kernel: CPU features: detected: GIC system register CPU interface May 17 00:51:22.008023 kernel: CPU features: detected: Hardware dirty bit management May 17 00:51:22.008029 kernel: CPU features: detected: Spectre-BHB May 17 00:51:22.008035 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 00:51:22.008041 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 00:51:22.008047 kernel: CPU features: detected: ARM erratum 1418040 May 17 00:51:22.008054 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 17 00:51:22.008060 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 00:51:22.008066 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 17 00:51:22.008072 kernel: Policy zone: Normal May 17 00:51:22.008079 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5 May 17 00:51:22.008086 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:51:22.008092 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:51:22.008098 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:51:22.008104 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:51:22.008110 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) May 17 00:51:22.008116 kernel: Memory: 3986940K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207220K reserved, 0K cma-reserved) May 17 00:51:22.008124 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:51:22.008130 kernel: trace event string verifier disabled May 17 00:51:22.008136 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:51:22.008143 kernel: rcu: RCU event tracing is enabled. May 17 00:51:22.008149 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:51:22.008155 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:51:22.008161 kernel: Tracing variant of Tasks RCU enabled. May 17 00:51:22.008167 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:51:22.008173 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:51:22.008179 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:51:22.008185 kernel: GICv3: 960 SPIs implemented May 17 00:51:22.008193 kernel: GICv3: 0 Extended SPIs implemented May 17 00:51:22.008199 kernel: GICv3: Distributor has no Range Selector support May 17 00:51:22.008204 kernel: Root IRQ handler: gic_handle_irq May 17 00:51:22.008210 kernel: GICv3: 16 PPIs implemented May 17 00:51:22.008216 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 17 00:51:22.008222 kernel: ITS: No ITS available, not enabling LPIs May 17 00:51:22.008228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:51:22.008234 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 17 00:51:22.008240 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 00:51:22.008247 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 00:51:22.008253 kernel: Console: colour dummy device 80x25 May 17 00:51:22.008260 kernel: printk: console [tty1] enabled May 17 00:51:22.008267 kernel: ACPI: Core revision 20210730 May 17 00:51:22.008273 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 00:51:22.008279 kernel: pid_max: default: 32768 minimum: 301 May 17 00:51:22.008286 kernel: LSM: Security Framework initializing May 17 00:51:22.008291 kernel: SELinux: Initializing. May 17 00:51:22.008298 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:51:22.008304 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:51:22.008310 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 17 00:51:22.008318 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 17 00:51:22.008383 kernel: rcu: Hierarchical SRCU implementation. May 17 00:51:22.008389 kernel: Remapping and enabling EFI services. May 17 00:51:22.008395 kernel: smp: Bringing up secondary CPUs ... May 17 00:51:22.008401 kernel: Detected PIPT I-cache on CPU1 May 17 00:51:22.008408 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 17 00:51:22.008414 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:51:22.008420 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 17 00:51:22.008426 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:51:22.008432 kernel: SMP: Total of 2 processors activated. 
May 17 00:51:22.008441 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:51:22.008447 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 17 00:51:22.008454 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:51:22.008460 kernel: CPU features: detected: CRC32 instructions May 17 00:51:22.008466 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:51:22.008472 kernel: CPU features: detected: LSE atomic instructions May 17 00:51:22.008479 kernel: CPU features: detected: Privileged Access Never May 17 00:51:22.008485 kernel: CPU: All CPU(s) started at EL1 May 17 00:51:22.008491 kernel: alternatives: patching kernel code May 17 00:51:22.008498 kernel: devtmpfs: initialized May 17 00:51:22.008509 kernel: KASLR enabled May 17 00:51:22.008515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:51:22.008523 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:51:22.008530 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:51:22.008536 kernel: SMBIOS 3.1.0 present. May 17 00:51:22.008543 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 17 00:51:22.008549 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:51:22.008556 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 00:51:22.008564 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:51:22.008570 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:51:22.008577 kernel: audit: initializing netlink subsys (disabled) May 17 00:51:22.008583 kernel: audit: type=2000 audit(0.090:1): state=initialized audit_enabled=0 res=1 May 17 00:51:22.008590 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:51:22.008596 kernel: cpuidle: using governor menu May 17 00:51:22.008603 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 17 00:51:22.008611 kernel: ASID allocator initialised with 32768 entries May 17 00:51:22.008617 kernel: ACPI: bus type PCI registered May 17 00:51:22.008624 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:51:22.008630 kernel: Serial: AMBA PL011 UART driver May 17 00:51:22.008637 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:51:22.008643 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:51:22.008650 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:51:22.008656 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 17 00:51:22.008663 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:51:22.008670 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:51:22.008677 kernel: ACPI: Added _OSI(Module Device) May 17 00:51:22.008684 kernel: ACPI: Added _OSI(Processor Device) May 17 00:51:22.008690 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:51:22.008696 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:51:22.008703 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:51:22.008709 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:51:22.008716 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:51:22.008722 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:51:22.008730 kernel: ACPI: Interpreter enabled May 17 00:51:22.008736 kernel: ACPI: Using GIC for interrupt routing May 17 00:51:22.008743 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 17 00:51:22.008749 kernel: printk: console [ttyAMA0] enabled May 17 00:51:22.008756 kernel: printk: bootconsole [pl11] disabled May 17 00:51:22.008762 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 17 00:51:22.008769 kernel: iommu: Default domain type: Translated May 17 00:51:22.008775 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:51:22.008782 kernel: vgaarb: loaded May 17 00:51:22.008788 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:51:22.008796 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:51:22.008802 kernel: PTP clock support registered May 17 00:51:22.008809 kernel: Registered efivars operations May 17 00:51:22.008815 kernel: No ACPI PMU IRQ for CPU0 May 17 00:51:22.008821 kernel: No ACPI PMU IRQ for CPU1 May 17 00:51:22.008828 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:51:22.008834 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:51:22.008841 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:51:22.008849 kernel: pnp: PnP ACPI init May 17 00:51:22.008855 kernel: pnp: PnP ACPI: found 0 devices May 17 00:51:22.008862 kernel: NET: Registered PF_INET protocol family May 17 00:51:22.008868 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:51:22.008875 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:51:22.008882 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:51:22.008888 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:51:22.008895 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 17 00:51:22.008901 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:51:22.008909 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:51:22.008916 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:51:22.008922 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:51:22.008929 kernel: PCI: CLS 0 bytes, default 64 May 17 00:51:22.008935 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 17 00:51:22.008942 kernel: kvm [1]: HYP mode not available May 17 00:51:22.008948 kernel: Initialise system trusted keyrings May 17 00:51:22.008955 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:51:22.008961 kernel: Key type asymmetric registered May 17 00:51:22.008969 kernel: Asymmetric key parser 'x509' registered May 17 00:51:22.008975 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:51:22.008982 kernel: io scheduler mq-deadline registered May 17 00:51:22.008988 kernel: io scheduler kyber registered May 17 00:51:22.008994 kernel: io scheduler bfq registered May 17 00:51:22.009001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:51:22.009007 kernel: thunder_xcv, ver 1.0 May 17 00:51:22.009014 kernel: thunder_bgx, ver 1.0 May 17 00:51:22.009020 kernel: nicpf, ver 1.0 May 17 00:51:22.009026 kernel: nicvf, ver 1.0 May 17 00:51:22.009146 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:51:22.009207 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:51:21 UTC (1747443081) May 17 00:51:22.009216 kernel: efifb: probing for efifb May 17 00:51:22.009223 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 17 00:51:22.009229 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 17 00:51:22.009236 kernel: efifb: scrolling: redraw May 17 00:51:22.009242 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:51:22.009251 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:51:22.009258 kernel: fb0: EFI VGA frame buffer device May 17 00:51:22.009264 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
May 17 00:51:22.009271 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:51:22.009277 kernel: NET: Registered PF_INET6 protocol family May 17 00:51:22.009284 kernel: Segment Routing with IPv6 May 17 00:51:22.009290 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:51:22.009297 kernel: NET: Registered PF_PACKET protocol family May 17 00:51:22.009303 kernel: Key type dns_resolver registered May 17 00:51:22.009310 kernel: registered taskstats version 1 May 17 00:51:22.009318 kernel: Loading compiled-in X.509 certificates May 17 00:51:22.009337 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 2fa973ae674d09a62938b8c6a2b9446b5340adb7' May 17 00:51:22.009344 kernel: Key type .fscrypt registered May 17 00:51:22.009350 kernel: Key type fscrypt-provisioning registered May 17 00:51:22.009357 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:51:22.009363 kernel: ima: Allocated hash algorithm: sha1 May 17 00:51:22.009370 kernel: ima: No architecture policies found May 17 00:51:22.009376 kernel: clk: Disabling unused clocks May 17 00:51:22.009385 kernel: Freeing unused kernel memory: 36416K May 17 00:51:22.009391 kernel: Run /init as init process May 17 00:51:22.009398 kernel: with arguments: May 17 00:51:22.009404 kernel: /init May 17 00:51:22.009411 kernel: with environment: May 17 00:51:22.009417 kernel: HOME=/ May 17 00:51:22.009423 kernel: TERM=linux May 17 00:51:22.009430 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:51:22.009439 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:51:22.009449 systemd[1]: Detected virtualization microsoft. May 17 00:51:22.009456 systemd[1]: Detected architecture arm64. May 17 00:51:22.009463 systemd[1]: Running in initrd. May 17 00:51:22.009470 systemd[1]: No hostname configured, using default hostname. May 17 00:51:22.009477 systemd[1]: Hostname set to . May 17 00:51:22.009484 systemd[1]: Initializing machine ID from random generator. May 17 00:51:22.009491 systemd[1]: Queued start job for default target initrd.target. May 17 00:51:22.009500 systemd[1]: Started systemd-ask-password-console.path. May 17 00:51:22.009507 systemd[1]: Reached target cryptsetup.target. May 17 00:51:22.009514 systemd[1]: Reached target paths.target. May 17 00:51:22.009520 systemd[1]: Reached target slices.target. May 17 00:51:22.009527 systemd[1]: Reached target swap.target. May 17 00:51:22.009534 systemd[1]: Reached target timers.target. May 17 00:51:22.009541 systemd[1]: Listening on iscsid.socket. May 17 00:51:22.009548 systemd[1]: Listening on iscsiuio.socket. May 17 00:51:22.009557 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:51:22.009564 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:51:22.009571 systemd[1]: Listening on systemd-journald.socket. May 17 00:51:22.009578 systemd[1]: Listening on systemd-networkd.socket. May 17 00:51:22.009585 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:51:22.009592 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:51:22.009599 systemd[1]: Reached target sockets.target. May 17 00:51:22.009606 systemd[1]: Starting kmod-static-nodes.service... May 17 00:51:22.009613 systemd[1]: Finished network-cleanup.service. 
May 17 00:51:22.009622 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:51:22.009629 systemd[1]: Starting systemd-journald.service... May 17 00:51:22.009636 systemd[1]: Starting systemd-modules-load.service... May 17 00:51:22.009643 systemd[1]: Starting systemd-resolved.service... May 17 00:51:22.009650 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:51:22.009660 systemd-journald[276]: Journal started May 17 00:51:22.009699 systemd-journald[276]: Runtime Journal (/run/log/journal/9f9f5fa46bcb468c8091a39837cd76ed) is 8.0M, max 78.5M, 70.5M free. May 17 00:51:21.993367 systemd-modules-load[277]: Inserted module 'overlay' May 17 00:51:22.040346 systemd[1]: Started systemd-journald.service. May 17 00:51:22.040399 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:51:22.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.045260 systemd[1]: Finished kmod-static-nodes.service. May 17 00:51:22.089335 kernel: audit: type=1130 audit(1747443082.044:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.089360 kernel: Bridge firewalling registered May 17 00:51:22.089369 kernel: audit: type=1130 audit(1747443082.069:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.049788 systemd-resolved[278]: Positive Trust Anchors: May 17 00:51:22.049797 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:51:22.133605 kernel: audit: type=1130 audit(1747443082.093:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.049824 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:51:22.203040 kernel: audit: type=1130 audit(1747443082.119:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:22.203074 kernel: SCSI subsystem initialized May 17 00:51:22.203084 kernel: audit: type=1130 audit(1747443082.124:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.051848 systemd-resolved[278]: Defaulting to hostname 'linux'. May 17 00:51:22.231269 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:51:22.231289 kernel: device-mapper: uevent: version 1.0.3 May 17 00:51:22.231298 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:51:22.069580 systemd[1]: Started systemd-resolved.service. May 17 00:51:22.073502 systemd-modules-load[277]: Inserted module 'br_netfilter' May 17 00:51:22.094042 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:51:22.119619 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:51:22.124681 systemd[1]: Reached target nss-lookup.target. May 17 00:51:22.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.184162 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:51:22.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.234921 systemd-modules-load[277]: Inserted module 'dm_multipath' May 17 00:51:22.336193 kernel: audit: type=1130 audit(1747443082.265:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.336215 kernel: audit: type=1130 audit(1747443082.290:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.336230 kernel: audit: type=1130 audit(1747443082.316:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.237215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:51:22.246060 systemd[1]: Finished systemd-modules-load.service. May 17 00:51:22.265645 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 17 00:51:22.365066 dracut-cmdline[296]: dracut-dracut-053 May 17 00:51:22.365066 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t May 17 00:51:22.365066 dracut-cmdline[296]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5 May 17 00:51:22.291197 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:51:22.428803 kernel: audit: type=1130 audit(1747443082.408:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.336440 systemd[1]: Starting dracut-cmdline.service... May 17 00:51:22.347386 systemd[1]: Starting systemd-sysctl.service... May 17 00:51:22.370051 systemd[1]: Finished systemd-sysctl.service. May 17 00:51:22.489343 kernel: Loading iSCSI transport class v2.0-870. May 17 00:51:22.504347 kernel: iscsi: registered transport (tcp) May 17 00:51:22.524586 kernel: iscsi: registered transport (qla4xxx) May 17 00:51:22.524650 kernel: QLogic iSCSI HBA Driver May 17 00:51:22.553521 systemd[1]: Finished dracut-cmdline.service. May 17 00:51:22.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:22.558738 systemd[1]: Starting dracut-pre-udev.service... May 17 00:51:22.610339 kernel: raid6: neonx8 gen() 13721 MB/s May 17 00:51:22.630333 kernel: raid6: neonx8 xor() 10836 MB/s May 17 00:51:22.651333 kernel: raid6: neonx4 gen() 13536 MB/s May 17 00:51:22.671335 kernel: raid6: neonx4 xor() 11110 MB/s May 17 00:51:22.691331 kernel: raid6: neonx2 gen() 12999 MB/s May 17 00:51:22.712331 kernel: raid6: neonx2 xor() 10244 MB/s May 17 00:51:22.732330 kernel: raid6: neonx1 gen() 10643 MB/s May 17 00:51:22.753331 kernel: raid6: neonx1 xor() 8903 MB/s May 17 00:51:22.774336 kernel: raid6: int64x8 gen() 6268 MB/s May 17 00:51:22.794331 kernel: raid6: int64x8 xor() 3544 MB/s May 17 00:51:22.814331 kernel: raid6: int64x4 gen() 7272 MB/s May 17 00:51:22.835332 kernel: raid6: int64x4 xor() 3859 MB/s May 17 00:51:22.855331 kernel: raid6: int64x2 gen() 6156 MB/s May 17 00:51:22.875335 kernel: raid6: int64x2 xor() 3320 MB/s May 17 00:51:22.896331 kernel: raid6: int64x1 gen() 5047 MB/s May 17 00:51:22.920530 kernel: raid6: int64x1 xor() 2647 MB/s May 17 00:51:22.920548 kernel: raid6: using algorithm neonx8 gen() 13721 MB/s May 17 00:51:22.920564 kernel: raid6: .... 
xor() 10836 MB/s, rmw enabled May 17 00:51:22.924685 kernel: raid6: using neon recovery algorithm May 17 00:51:22.946601 kernel: xor: measuring software checksum speed May 17 00:51:22.946613 kernel: 8regs : 17202 MB/sec May 17 00:51:22.950396 kernel: 32regs : 20712 MB/sec May 17 00:51:22.954134 kernel: arm64_neon : 27851 MB/sec May 17 00:51:22.954146 kernel: xor: using function: arm64_neon (27851 MB/sec) May 17 00:51:23.014343 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 17 00:51:23.023253 systemd[1]: Finished dracut-pre-udev.service. May 17 00:51:23.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:23.031000 audit: BPF prog-id=7 op=LOAD May 17 00:51:23.031000 audit: BPF prog-id=8 op=LOAD May 17 00:51:23.032266 systemd[1]: Starting systemd-udevd.service... May 17 00:51:23.046549 systemd-udevd[476]: Using default interface naming scheme 'v252'. May 17 00:51:23.051884 systemd[1]: Started systemd-udevd.service. May 17 00:51:23.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:23.062899 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:51:23.078559 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation May 17 00:51:23.104179 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:51:23.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:23.109552 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:51:23.145584 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:51:23.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:23.193345 kernel: hv_vmbus: Vmbus version:5.3 May 17 00:51:23.224358 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:51:23.224407 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:51:23.224417 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:51:23.224425 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 17 00:51:23.235041 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 17 00:51:23.246000 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:51:23.255346 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:51:23.260340 kernel: scsi host0: storvsc_host_t May 17 00:51:23.272911 kernel: scsi host1: storvsc_host_t May 17 00:51:23.273080 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:51:23.280331 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:51:23.298368 kernel: sr 1:0:0:2: [sr0] scsi-1 drive May 17 00:51:23.299360 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:51:23.299381 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 May 17 00:51:23.309634 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:51:23.338257 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks May 17 00:51:23.338386 kernel: sd 1:0:0:0: [sda] Write Protect is off May 17 00:51:23.338472 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:51:23.338550 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 00:51:23.338627 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:51:23.338636 kernel: sd 1:0:0:0: [sda] Attached SCSI disk May 17 00:51:23.338711 kernel: hv_netvsc 002248b4-cabe-0022-48b4-cabe002248b4 eth0: VF slot 1 added May 17 00:51:23.348358 kernel: hv_vmbus: registering driver hv_pci May 17 00:51:23.356345 kernel: hv_pci 206bcd7a-81c9-4db9-91b7-60d06543e7e4: PCI VMBus probing: Using version 0x10004 May 17 00:51:23.490448 kernel: hv_pci 206bcd7a-81c9-4db9-91b7-60d06543e7e4: PCI host bridge to bus 81c9:00 May 17 00:51:23.490556 kernel: pci_bus 81c9:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 17 00:51:23.490673 kernel: pci_bus 81c9:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:51:23.490769 kernel: pci 81c9:00:02.0: [15b3:1018] type 00 class 0x020000 May 17 00:51:23.490877 kernel: pci 81c9:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:51:23.490958 kernel: pci 81c9:00:02.0: enabling Extended Tags May 17 00:51:23.491046 kernel: pci 81c9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 81c9:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 17 00:51:23.491123 kernel: pci_bus 81c9:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:51:23.491226 kernel: pci 81c9:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:51:23.533357 kernel: mlx5_core 81c9:00:02.0: firmware version: 16.30.1284 May 17 00:51:23.751364 kernel: mlx5_core 81c9:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) May 17 00:51:23.751490 kernel: hv_netvsc 002248b4-cabe-0022-48b4-cabe002248b4 eth0: VF registering: eth1 May 17 00:51:23.751588 kernel: mlx5_core 81c9:00:02.0 eth1: joined to eth0 May 17 00:51:23.759342 kernel: mlx5_core 81c9:00:02.0 enP33225s1: 
renamed from eth1 May 17 00:51:23.919350 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (529) May 17 00:51:23.931757 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:51:23.964049 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:51:24.109152 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:51:24.203907 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:51:24.217632 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:51:24.224130 systemd[1]: Starting disk-uuid.service... May 17 00:51:24.245356 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:51:25.261343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:51:25.261658 disk-uuid[604]: The operation has completed successfully. May 17 00:51:25.311197 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:51:25.316464 systemd[1]: Finished disk-uuid.service. May 17 00:51:25.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:25.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:25.329461 systemd[1]: Starting verity-setup.service... May 17 00:51:25.371424 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:51:25.567162 systemd[1]: Found device dev-mapper-usr.device. May 17 00:51:25.577550 systemd[1]: Mounting sysusr-usr.mount... May 17 00:51:25.581367 systemd[1]: Finished verity-setup.service. May 17 00:51:25.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:25.642337 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:51:25.642928 systemd[1]: Mounted sysusr-usr.mount. May 17 00:51:25.646885 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:51:25.647702 systemd[1]: Starting ignition-setup.service... May 17 00:51:25.655249 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:51:25.690084 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:51:25.690141 kernel: BTRFS info (device sda6): using free space tree May 17 00:51:25.694745 kernel: BTRFS info (device sda6): has skinny extents May 17 00:51:25.739608 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:51:25.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:25.750000 audit: BPF prog-id=9 op=LOAD May 17 00:51:25.751455 systemd[1]: Starting systemd-networkd.service... May 17 00:51:25.775308 systemd-networkd[844]: lo: Link UP May 17 00:51:25.775319 systemd-networkd[844]: lo: Gained carrier May 17 00:51:25.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:25.775745 systemd-networkd[844]: Enumeration completed May 17 00:51:25.779196 systemd[1]: Started systemd-networkd.service. May 17 00:51:25.779874 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:51:25.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:25.784718 systemd[1]: Reached target network.target. May 17 00:51:25.793809 systemd[1]: Starting iscsiuio.service... May 17 00:51:25.829435 iscsid[854]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:51:25.829435 iscsid[854]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 00:51:25.829435 iscsid[854]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 17 00:51:25.829435 iscsid[854]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:51:25.829435 iscsid[854]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:51:25.829435 iscsid[854]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:51:25.829435 iscsid[854]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:51:25.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:25.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:25.807146 systemd[1]: Started iscsiuio.service. May 17 00:51:25.812840 systemd[1]: Starting iscsid.service... May 17 00:51:25.833633 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:51:25.833995 systemd[1]: Started iscsid.service. May 17 00:51:25.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:25.846352 systemd[1]: Starting dracut-initqueue.service... May 17 00:51:25.885768 systemd[1]: Finished dracut-initqueue.service. May 17 00:51:25.891081 systemd[1]: Reached target remote-fs-pre.target. May 17 00:51:25.901741 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:51:25.988178 kernel: mlx5_core 81c9:00:02.0 enP33225s1: Link up May 17 00:51:25.907060 systemd[1]: Reached target remote-fs.target. May 17 00:51:25.919841 systemd[1]: Starting dracut-pre-mount.service... May 17 00:51:25.955233 systemd[1]: Finished dracut-pre-mount.service. May 17 00:51:26.011131 systemd[1]: Finished ignition-setup.service. May 17 00:51:26.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 17 00:51:26.025306 kernel: kauditd_printk_skb: 17 callbacks suppressed May 17 00:51:26.025336 kernel: audit: type=1130 audit(1747443086.016:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:26.042394 systemd[1]: Starting ignition-fetch-offline.service... May 17 00:51:26.059348 kernel: hv_netvsc 002248b4-cabe-0022-48b4-cabe002248b4 eth0: Data path switched to VF: enP33225s1 May 17 00:51:26.059507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:51:26.060482 systemd-networkd[844]: enP33225s1: Link UP May 17 00:51:26.060658 systemd-networkd[844]: eth0: Link UP May 17 00:51:26.060988 systemd-networkd[844]: eth0: Gained carrier May 17 00:51:26.068733 systemd-networkd[844]: enP33225s1: Gained carrier May 17 00:51:26.086391 systemd-networkd[844]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:51:28.109474 systemd-networkd[844]: eth0: Gained IPv6LL May 17 00:51:28.774137 ignition[869]: Ignition 2.14.0 May 17 00:51:28.777503 ignition[869]: Stage: fetch-offline May 17 00:51:28.777587 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:51:28.777617 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:51:28.878377 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:51:28.878555 ignition[869]: parsed url from cmdline: "" May 17 00:51:28.878559 ignition[869]: no config URL provided May 17 00:51:28.878565 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:51:28.926401 kernel: audit: type=1130 audit(1747443088.900:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:28.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:28.891827 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:51:28.878573 ignition[869]: no config at "/usr/lib/ignition/user.ign" May 17 00:51:28.901645 systemd[1]: Starting ignition-fetch.service... 
May 17 00:51:28.878578 ignition[869]: failed to fetch config: resource requires networking May 17 00:51:28.878870 ignition[869]: Ignition finished successfully May 17 00:51:28.929296 ignition[876]: Ignition 2.14.0 May 17 00:51:28.929303 ignition[876]: Stage: fetch May 17 00:51:28.929429 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:51:28.929448 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:51:28.941385 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:51:28.941509 ignition[876]: parsed url from cmdline: "" May 17 00:51:28.941512 ignition[876]: no config URL provided May 17 00:51:28.941518 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:51:28.941525 ignition[876]: no config at "/usr/lib/ignition/user.ign" May 17 00:51:28.941553 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 17 00:51:29.025886 ignition[876]: GET result: OK May 17 00:51:29.028363 unknown[876]: fetched base config from "system" May 17 00:51:29.025939 ignition[876]: config has been read from IMDS userdata May 17 00:51:29.064388 kernel: audit: type=1130 audit(1747443089.040:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:29.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:29.028370 unknown[876]: fetched base config from "system" May 17 00:51:29.025957 ignition[876]: parsing config with SHA512: fd3d20f4f0a80ff5f7530707959930d538ab877aeeae716fad1ebcaf9bd37e54710d3f9e635cfea08fc6fbcd2836317d778fff3da7853587bc11d24a25c4cf57 May 17 00:51:29.028376 unknown[876]: fetched user config from "azure" May 17 00:51:29.028759 ignition[876]: fetch: fetch complete May 17 00:51:29.032292 systemd[1]: Finished ignition-fetch.service. May 17 00:51:29.028764 ignition[876]: fetch: fetch passed May 17 00:51:29.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:29.041741 systemd[1]: Starting ignition-kargs.service... May 17 00:51:29.116472 kernel: audit: type=1130 audit(1747443089.087:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:29.028802 ignition[876]: Ignition finished successfully May 17 00:51:29.079368 systemd[1]: Finished ignition-kargs.service. May 17 00:51:29.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:29.072566 ignition[882]: Ignition 2.14.0 May 17 00:51:29.151602 kernel: audit: type=1130 audit(1747443089.124:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:29.089022 systemd[1]: Starting ignition-disks.service... 
May 17 00:51:29.072573 ignition[882]: Stage: kargs May 17 00:51:29.117650 systemd[1]: Finished ignition-disks.service. May 17 00:51:29.072684 ignition[882]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:51:29.125090 systemd[1]: Reached target initrd-root-device.target. May 17 00:51:29.072708 ignition[882]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:51:29.148514 systemd[1]: Reached target local-fs-pre.target. May 17 00:51:29.075633 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:51:29.156035 systemd[1]: Reached target local-fs.target. May 17 00:51:29.077742 ignition[882]: kargs: kargs passed May 17 00:51:29.164426 systemd[1]: Reached target sysinit.target. May 17 00:51:29.077803 ignition[882]: Ignition finished successfully May 17 00:51:29.171566 systemd[1]: Reached target basic.target. May 17 00:51:29.098372 ignition[888]: Ignition 2.14.0 May 17 00:51:29.180505 systemd[1]: Starting systemd-fsck-root.service... May 17 00:51:29.098378 ignition[888]: Stage: disks May 17 00:51:29.098485 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:51:29.098509 ignition[888]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:51:29.101087 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:51:29.116247 ignition[888]: disks: disks passed May 17 00:51:29.116299 ignition[888]: Ignition finished successfully May 17 00:51:29.286989 systemd-fsck[896]: ROOT: clean, 619/7326000 files, 481078/7359488 blocks May 17 00:51:29.301638 systemd[1]: Finished systemd-fsck-root.service. May 17 00:51:29.333432 kernel: audit: type=1130 audit(1747443089.306:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:29.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:29.310653 systemd[1]: Mounting sysroot.mount... May 17 00:51:29.351095 systemd[1]: Mounted sysroot.mount. May 17 00:51:29.358303 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:51:29.355037 systemd[1]: Reached target initrd-root-fs.target. May 17 00:51:29.392564 systemd[1]: Mounting sysroot-usr.mount... May 17 00:51:29.397278 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 00:51:29.405167 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:51:29.405209 systemd[1]: Reached target ignition-diskful.target. May 17 00:51:29.411446 systemd[1]: Mounted sysroot-usr.mount. May 17 00:51:29.463194 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:51:29.468401 systemd[1]: Starting initrd-setup-root.service... 
May 17 00:51:29.498252 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (907) May 17 00:51:29.498308 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:51:29.498557 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:51:29.518075 kernel: BTRFS info (device sda6): using free space tree May 17 00:51:29.518103 kernel: BTRFS info (device sda6): has skinny extents May 17 00:51:29.523465 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:51:29.536533 initrd-setup-root[938]: cut: /sysroot/etc/group: No such file or directory May 17 00:51:29.561025 initrd-setup-root[946]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:51:29.570256 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:51:30.113193 systemd[1]: Finished initrd-setup-root.service. May 17 00:51:30.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:30.119442 systemd[1]: Starting ignition-mount.service... May 17 00:51:30.155671 kernel: audit: type=1130 audit(1747443090.118:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:30.139131 systemd[1]: Starting sysroot-boot.service... May 17 00:51:30.156472 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:51:30.156589 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:51:30.180705 systemd[1]: Finished sysroot-boot.service. May 17 00:51:30.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:30.209351 kernel: audit: type=1130 audit(1747443090.185:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:30.213092 ignition[975]: INFO : Ignition 2.14.0 May 17 00:51:30.217169 ignition[975]: INFO : Stage: mount May 17 00:51:30.217169 ignition[975]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:51:30.217169 ignition[975]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:51:30.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:30.259107 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:51:30.259107 ignition[975]: INFO : mount: mount passed May 17 00:51:30.259107 ignition[975]: INFO : Ignition finished successfully May 17 00:51:30.276647 kernel: audit: type=1130 audit(1747443090.232:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:30.227786 systemd[1]: Finished ignition-mount.service. 
May 17 00:51:30.977000 coreos-metadata[906]: May 17 00:51:30.976 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:51:30.987309 coreos-metadata[906]: May 17 00:51:30.987 INFO Fetch successful May 17 00:51:31.021023 coreos-metadata[906]: May 17 00:51:31.020 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 17 00:51:31.041597 coreos-metadata[906]: May 17 00:51:31.041 INFO Fetch successful May 17 00:51:31.074346 coreos-metadata[906]: May 17 00:51:31.073 INFO wrote hostname ci-3510.3.7-n-6dc47d205e to /sysroot/etc/hostname May 17 00:51:31.082836 systemd[1]: Finished flatcar-metadata-hostname.service. May 17 00:51:31.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:31.089042 systemd[1]: Starting ignition-files.service... May 17 00:51:31.119861 kernel: audit: type=1130 audit(1747443091.087:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:31.122149 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:51:31.146786 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (985) May 17 00:51:31.146832 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:51:31.146852 kernel: BTRFS info (device sda6): using free space tree May 17 00:51:31.156167 kernel: BTRFS info (device sda6): has skinny extents May 17 00:51:31.160493 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:51:31.174075 ignition[1004]: INFO : Ignition 2.14.0 May 17 00:51:31.174075 ignition[1004]: INFO : Stage: files May 17 00:51:31.184864 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:51:31.184864 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:51:31.184864 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:51:31.184864 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping May 17 00:51:31.184864 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:51:31.184864 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:51:31.248652 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:51:31.256422 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:51:31.264314 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:51:31.263752 unknown[1004]: wrote ssh authorized keys file for user: core May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:51:31.278594 
ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:51:31.278594 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2667886783" May 17 00:51:31.394517 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(7): op(8): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2667886783": device or resource busy May 17 00:51:31.394517 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(7): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2667886783", trying btrfs: device or resource busy May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2667886783" May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2667886783" May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [started] unmounting "/mnt/oem2667886783" May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [finished] unmounting "/mnt/oem2667886783" May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:51:31.394517 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3146141475" May 17 00:51:31.394517 ignition[1004]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3146141475": device or resource busy May 17 00:51:31.394517 ignition[1004]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3146141475", trying btrfs: device or resource busy May 17 00:51:31.394517 
ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3146141475" May 17 00:51:31.291419 systemd[1]: mnt-oem2667886783.mount: Deactivated successfully. May 17 00:51:31.556037 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3146141475" May 17 00:51:31.556037 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3146141475" May 17 00:51:31.556037 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3146141475" May 17 00:51:31.556037 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:51:31.556037 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:51:31.556037 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 17 00:51:32.055368 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK May 17 00:51:32.303686 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:51:32.303686 ignition[1004]: INFO : files: op(10): [started] processing unit "waagent.service" May 17 00:51:32.303686 ignition[1004]: INFO : files: op(10): [finished] processing unit "waagent.service" May 17 00:51:32.303686 ignition[1004]: INFO : files: op(11): [started] processing unit "nvidia.service" May 17 00:51:32.362994 kernel: audit: type=1130 audit(1747443092.328:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.317831 systemd[1]: Finished ignition-files.service. 
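The ignition-files stage logged above and in the op entries just below creates the "core" user, writes /etc/flatcar/update.conf and the waagent/nvidia units, adds a containerd drop-in, and links the kubernetes sysext image. For orientation, here is a sketch of the shape of an Ignition (spec v3) config that would drive operations like these; the spec version and field names are from the Ignition v3 schema as I recall it, and every value is a placeholder — this is not the config this node actually received.

# Illustrative only: roughly the Ignition v3 config shape behind the
# file/link/unit operations logged here. All values are placeholders.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/etc/flatcar/update.conf",
                "contents": {"source": "data:,GROUP%3Dstable%0A"},  # placeholder body
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "waagent.service", "enabled": True},
            {"name": "nvidia.service", "enabled": True},
            {
                "name": "containerd.service",
                "dropins": [
                    {
                        "name": "10-use-cgroupfs.conf",
                        "contents": "[Service]\n# placeholder drop-in body\n",
                    }
                ],
            },
        ]
    },
}

print(json.dumps(config, indent=2))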
May 17 00:51:32.368581 ignition[1004]: INFO : files: op(11): [finished] processing unit "nvidia.service" May 17 00:51:32.368581 ignition[1004]: INFO : files: op(12): [started] processing unit "containerd.service" May 17 00:51:32.368581 ignition[1004]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:51:32.368581 ignition[1004]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:51:32.368581 ignition[1004]: INFO : files: op(12): [finished] processing unit "containerd.service" May 17 00:51:32.368581 ignition[1004]: INFO : files: op(14): [started] setting preset to enabled for "waagent.service" May 17 00:51:32.368581 ignition[1004]: INFO : files: op(14): [finished] setting preset to enabled for "waagent.service" May 17 00:51:32.368581 ignition[1004]: INFO : files: op(15): [started] setting preset to enabled for "nvidia.service" May 17 00:51:32.368581 ignition[1004]: INFO : files: op(15): [finished] setting preset to enabled for "nvidia.service" May 17 00:51:32.368581 ignition[1004]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:51:32.368581 ignition[1004]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:51:32.368581 ignition[1004]: INFO : files: files passed May 17 00:51:32.368581 ignition[1004]: INFO : Ignition finished successfully May 17 00:51:32.609403 kernel: audit: type=1130 audit(1747443092.387:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.609433 kernel: audit: type=1131 audit(1747443092.387:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.609444 kernel: audit: type=1130 audit(1747443092.434:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.609454 kernel: audit: type=1130 audit(1747443092.507:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.609463 kernel: audit: type=1131 audit(1747443092.507:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:32.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.331347 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:51:32.615691 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:51:32.355359 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:51:32.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.365052 systemd[1]: Starting ignition-quench.service... May 17 00:51:32.372979 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:51:32.667382 kernel: audit: type=1130 audit(1747443092.628:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.373099 systemd[1]: Finished ignition-quench.service. May 17 00:51:32.387918 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:51:32.434571 systemd[1]: Reached target ignition-complete.target. May 17 00:51:32.471435 systemd[1]: Starting initrd-parse-etc.service... May 17 00:51:32.499007 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:51:32.499112 systemd[1]: Finished initrd-parse-etc.service. May 17 00:51:32.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.507931 systemd[1]: Reached target initrd-fs.target. May 17 00:51:32.552444 systemd[1]: Reached target initrd.target. May 17 00:51:32.563907 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:51:32.757591 kernel: audit: type=1131 audit(1747443092.711:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.571238 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:51:32.615967 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:51:32.661422 systemd[1]: Starting initrd-cleanup.service... May 17 00:51:32.679776 systemd[1]: Stopped target nss-lookup.target. May 17 00:51:32.685330 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:51:32.694454 systemd[1]: Stopped target timers.target. May 17 00:51:32.702500 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:51:32.702559 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:51:32.732820 systemd[1]: Stopped target initrd.target. May 17 00:51:32.741709 systemd[1]: Stopped target basic.target. May 17 00:51:32.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:32.750170 systemd[1]: Stopped target ignition-complete.target. May 17 00:51:32.868195 kernel: audit: type=1131 audit(1747443092.840:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.762504 systemd[1]: Stopped target ignition-diskful.target. May 17 00:51:32.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.771454 systemd[1]: Stopped target initrd-root-device.target. May 17 00:51:32.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.780294 systemd[1]: Stopped target remote-fs.target. May 17 00:51:32.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.789851 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:51:32.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.798604 systemd[1]: Stopped target sysinit.target. May 17 00:51:32.807239 systemd[1]: Stopped target local-fs.target. May 17 00:51:32.815018 systemd[1]: Stopped target local-fs-pre.target. May 17 00:51:32.929232 ignition[1042]: INFO : Ignition 2.14.0 May 17 00:51:32.929232 ignition[1042]: INFO : Stage: umount May 17 00:51:32.929232 ignition[1042]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:51:32.929232 ignition[1042]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:51:32.929232 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:51:32.929232 ignition[1042]: INFO : umount: umount passed May 17 00:51:32.929232 ignition[1042]: INFO : Ignition finished successfully May 17 00:51:32.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:32.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.824565 systemd[1]: Stopped target swap.target. May 17 00:51:32.832856 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:51:33.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.832921 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:51:32.840948 systemd[1]: Stopped target cryptsetup.target. May 17 00:51:32.867399 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:51:32.867464 systemd[1]: Stopped dracut-initqueue.service. May 17 00:51:32.872740 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:51:32.872782 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:51:32.882603 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:51:32.882641 systemd[1]: Stopped ignition-files.service. May 17 00:51:33.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.891258 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:51:32.891299 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:51:32.904143 systemd[1]: Stopping ignition-mount.service... May 17 00:51:33.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.914306 systemd[1]: Stopping sysroot-boot.service... May 17 00:51:33.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.922120 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:51:33.137000 audit: BPF prog-id=6 op=UNLOAD May 17 00:51:32.922183 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:51:32.931806 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:51:32.931853 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:51:33.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.937013 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:51:33.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:32.937117 systemd[1]: Finished initrd-cleanup.service. May 17 00:51:33.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.946592 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:51:32.946684 systemd[1]: Stopped ignition-mount.service. May 17 00:51:32.954363 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:51:32.954410 systemd[1]: Stopped ignition-disks.service. May 17 00:51:33.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.965429 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:51:32.965468 systemd[1]: Stopped ignition-kargs.service. May 17 00:51:32.990459 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:51:33.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:32.990518 systemd[1]: Stopped ignition-fetch.service. May 17 00:51:33.255802 kernel: hv_netvsc 002248b4-cabe-0022-48b4-cabe002248b4 eth0: Data path switched from VF: enP33225s1 May 17 00:51:32.998933 systemd[1]: Stopped target network.target. May 17 00:51:33.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.008665 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:51:33.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.008720 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:51:33.017096 systemd[1]: Stopped target paths.target. May 17 00:51:33.026235 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:51:33.034651 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:51:33.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.040955 systemd[1]: Stopped target slices.target. May 17 00:51:33.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.051009 systemd[1]: Stopped target sockets.target. May 17 00:51:33.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:33.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.060186 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:51:33.060238 systemd[1]: Closed iscsid.socket. May 17 00:51:33.070679 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:51:33.070710 systemd[1]: Closed iscsiuio.socket. May 17 00:51:33.079719 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:51:33.079765 systemd[1]: Stopped ignition-setup.service. May 17 00:51:33.091581 systemd[1]: Stopping systemd-networkd.service... May 17 00:51:33.099886 systemd[1]: Stopping systemd-resolved.service... May 17 00:51:33.106388 systemd-networkd[844]: eth0: DHCPv6 lease lost May 17 00:51:33.359000 audit: BPF prog-id=9 op=UNLOAD May 17 00:51:33.112582 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:51:33.113077 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:51:33.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.113167 systemd[1]: Stopped systemd-networkd.service. May 17 00:51:33.123814 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:51:33.123915 systemd[1]: Stopped systemd-resolved.service. May 17 00:51:33.134502 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:51:33.134547 systemd[1]: Closed systemd-networkd.socket. May 17 00:51:33.147625 systemd[1]: Stopping network-cleanup.service... May 17 00:51:33.157099 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:51:33.157267 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:51:33.167075 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:51:33.167126 systemd[1]: Stopped systemd-sysctl.service. May 17 00:51:33.180559 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:51:33.180606 systemd[1]: Stopped systemd-modules-load.service. May 17 00:51:33.185579 systemd[1]: Stopping systemd-udevd.service... May 17 00:51:33.194139 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:51:33.202631 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:51:33.202794 systemd[1]: Stopped systemd-udevd.service. May 17 00:51:33.212269 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:51:33.212310 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:51:33.220363 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:51:33.220402 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:51:33.230198 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:51:33.230252 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:51:33.238429 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:51:33.238471 systemd[1]: Stopped dracut-cmdline.service. May 17 00:51:33.260172 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:51:33.260223 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:51:33.274053 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:51:33.287661 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 17 00:51:33.287734 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:51:33.298532 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:51:33.298571 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:51:33.302996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:51:33.303034 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:51:33.313385 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:51:33.313900 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:51:33.313987 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:51:33.365625 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:51:33.365747 systemd[1]: Stopped network-cleanup.service. May 17 00:51:33.578755 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:51:33.578859 systemd[1]: Stopped sysroot-boot.service. May 17 00:51:33.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.587989 systemd[1]: Reached target initrd-switch-root.target. May 17 00:51:33.596271 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:51:33.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:33.596336 systemd[1]: Stopped initrd-setup-root.service. May 17 00:51:33.606005 systemd[1]: Starting initrd-switch-root.service... May 17 00:51:33.659150 systemd[1]: Switching root. May 17 00:51:33.663000 audit: BPF prog-id=5 op=UNLOAD May 17 00:51:33.663000 audit: BPF prog-id=4 op=UNLOAD May 17 00:51:33.663000 audit: BPF prog-id=3 op=UNLOAD May 17 00:51:33.663000 audit: BPF prog-id=8 op=UNLOAD May 17 00:51:33.663000 audit: BPF prog-id=7 op=UNLOAD May 17 00:51:33.686563 iscsid[854]: iscsid shutting down. May 17 00:51:33.689889 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). May 17 00:51:33.689948 systemd-journald[276]: Journal stopped May 17 00:51:46.073424 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:51:46.073447 kernel: SELinux: Class anon_inode not defined in policy. May 17 00:51:46.073457 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:51:46.073468 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:51:46.073476 kernel: SELinux: policy capability open_perms=1 May 17 00:51:46.073484 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:51:46.073494 kernel: SELinux: policy capability always_check_network=0 May 17 00:51:46.073502 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:51:46.073510 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:51:46.073518 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:51:46.073527 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:51:46.073536 kernel: kauditd_printk_skb: 39 callbacks suppressed May 17 00:51:46.073545 kernel: audit: type=1403 audit(1747443096.727:86): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:51:46.073555 systemd[1]: Successfully loaded SELinux policy in 270.676ms. May 17 00:51:46.073566 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.178ms. 
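Each unit transition in the initrd teardown above is mirrored by an audit SERVICE_START/SERVICE_STOP record. The following sketch pulls the timestamp, event, unit name, and result out of records in exactly this format, which is handy for checking which units were stopped before the switch-root; the regex is written against the records shown in this dump and nothing more.

# Sketch: extract (timestamp, event, unit, result) from audit service records
# in the format seen throughout this dump, e.g.
#   ... audit[1]: SERVICE_STOP pid=1 uid=0 ... msg='unit=sysroot-boot ... res=success'
import re
import sys

RECORD = re.compile(
    r"(?P<ts>[A-Z][a-z]{2} \d+ \d\d:\d\d:\d\d\.\d+) audit\[\d+\]: "
    r"(?P<event>SERVICE_START|SERVICE_STOP)"
    r".*?unit=(?P<unit>\S+)"
    r".*?res=(?P<res>\w+)"
)

def scan(lines):
    """Yield one tuple per audit service record found in the given lines."""
    for line in lines:
        for m in RECORD.finditer(line):
            yield m.group("ts"), m.group("event"), m.group("unit"), m.group("res")

if __name__ == "__main__":
    # Usage: python3 audit_units.py < boot.log
    for ts, event, unit, res in scan(sys.stdin):
        print(f"{ts}  {event:<13}  {unit:<40}  {res}")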
May 17 00:51:46.073577 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:51:46.073587 systemd[1]: Detected virtualization microsoft. May 17 00:51:46.073596 systemd[1]: Detected architecture arm64. May 17 00:51:46.073605 systemd[1]: Detected first boot. May 17 00:51:46.073617 systemd[1]: Hostname set to . May 17 00:51:46.073626 systemd[1]: Initializing machine ID from random generator. May 17 00:51:46.073635 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:51:46.073646 kernel: audit: type=1400 audit(1747443098.721:87): avc: denied { associate } for pid=1094 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:51:46.073656 kernel: audit: type=1300 audit(1747443098.721:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=400010569c a1=4000028b58 a2=4000026a40 a3=32 items=0 ppid=1077 pid=1094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:46.073666 kernel: audit: type=1327 audit(1747443098.721:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:51:46.073675 kernel: audit: type=1400 audit(1747443098.736:88): avc: denied { associate } for pid=1094 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:51:46.073685 kernel: audit: type=1300 audit(1747443098.736:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000105775 a2=1ed a3=0 items=2 ppid=1077 pid=1094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:46.073695 kernel: audit: type=1307 audit(1747443098.736:88): cwd="/" May 17 00:51:46.073705 kernel: audit: type=1302 audit(1747443098.736:88): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:46.073714 kernel: audit: type=1302 audit(1747443098.736:88): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:46.073723 kernel: audit: type=1327 audit(1747443098.736:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:51:46.073732 
systemd[1]: Populated /etc with preset unit settings. May 17 00:51:46.073742 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:51:46.073752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:51:46.073763 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:51:46.073772 systemd[1]: Queued start job for default target multi-user.target. May 17 00:51:46.073781 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:51:46.073791 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:51:46.073800 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:51:46.073810 systemd[1]: Created slice system-getty.slice. May 17 00:51:46.073822 systemd[1]: Created slice system-modprobe.slice. May 17 00:51:46.073832 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:51:46.073842 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:51:46.073851 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:51:46.073861 systemd[1]: Created slice user.slice. May 17 00:51:46.073870 systemd[1]: Started systemd-ask-password-console.path. May 17 00:51:46.073879 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:51:46.073889 systemd[1]: Set up automount boot.automount. May 17 00:51:46.073898 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:51:46.073907 systemd[1]: Reached target integritysetup.target. May 17 00:51:46.073917 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:51:46.073927 systemd[1]: Reached target remote-fs.target. May 17 00:51:46.073936 systemd[1]: Reached target slices.target. May 17 00:51:46.073946 systemd[1]: Reached target swap.target. May 17 00:51:46.073955 systemd[1]: Reached target torcx.target. May 17 00:51:46.073965 systemd[1]: Reached target veritysetup.target. May 17 00:51:46.073974 systemd[1]: Listening on systemd-coredump.socket. May 17 00:51:46.073983 systemd[1]: Listening on systemd-initctl.socket. May 17 00:51:46.073994 kernel: audit: type=1400 audit(1747443105.657:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:51:46.074003 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:51:46.074013 kernel: audit: type=1335 audit(1747443105.657:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:51:46.074023 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:51:46.074032 systemd[1]: Listening on systemd-journald.socket. May 17 00:51:46.074042 systemd[1]: Listening on systemd-networkd.socket. May 17 00:51:46.074051 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:51:46.074062 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:51:46.074071 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:51:46.074081 systemd[1]: Mounting dev-hugepages.mount... May 17 00:51:46.074091 systemd[1]: Mounting dev-mqueue.mount... 
May 17 00:51:46.074100 systemd[1]: Mounting media.mount... May 17 00:51:46.074109 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:51:46.074120 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:51:46.074130 systemd[1]: Mounting tmp.mount... May 17 00:51:46.074139 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:51:46.074149 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:46.074158 systemd[1]: Starting kmod-static-nodes.service... May 17 00:51:46.074168 systemd[1]: Starting modprobe@configfs.service... May 17 00:51:46.074177 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:46.074186 systemd[1]: Starting modprobe@drm.service... May 17 00:51:46.074196 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:46.074207 systemd[1]: Starting modprobe@fuse.service... May 17 00:51:46.074259 systemd[1]: Starting modprobe@loop.service... May 17 00:51:46.074273 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:51:46.074283 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 00:51:46.074293 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 17 00:51:46.074302 systemd[1]: Starting systemd-journald.service... May 17 00:51:46.074311 kernel: loop: module loaded May 17 00:51:46.090550 systemd[1]: Starting systemd-modules-load.service... May 17 00:51:46.090600 systemd[1]: Starting systemd-network-generator.service... May 17 00:51:46.090618 systemd[1]: Starting systemd-remount-fs.service... May 17 00:51:46.090628 kernel: fuse: init (API version 7.34) May 17 00:51:46.090638 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:51:46.090649 systemd[1]: Mounted dev-hugepages.mount. May 17 00:51:46.090659 systemd[1]: Mounted dev-mqueue.mount. May 17 00:51:46.090668 systemd[1]: Mounted media.mount. May 17 00:51:46.090678 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:51:46.090687 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:51:46.090697 systemd[1]: Mounted tmp.mount. May 17 00:51:46.090708 kernel: audit: type=1305 audit(1747443106.067:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:51:46.090718 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:51:46.090734 systemd-journald[1197]: Journal started May 17 00:51:46.090804 systemd-journald[1197]: Runtime Journal (/run/log/journal/c23bc28aeddd484eaefdd1b9b6936bec) is 8.0M, max 78.5M, 70.5M free. May 17 00:51:45.657000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 00:51:46.067000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:51:46.129560 kernel: audit: type=1300 audit(1747443106.067:91): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdc7136c0 a2=4000 a3=1 items=0 ppid=1 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:46.129618 systemd[1]: Started systemd-journald.service. 
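systemd-journald reports its runtime journal above at /run/log/journal/c23bc28aeddd484eaefdd1b9b6936bec (it is flushed to /var/log/journal by systemd-journal-flush.service further down). A sketch of reading that directory back with journalctl, assuming journalctl is on PATH and the directory is readable; the path is the one printed in the log, and short-precise output matches the microsecond timestamps used in this dump.

# Sketch: read the runtime journal directory named in the log above.
import subprocess

result = subprocess.run(
    [
        "journalctl",
        "-D", "/run/log/journal/c23bc28aeddd484eaefdd1b9b6936bec",  # path from the log
        "-o", "short-precise",  # microsecond timestamps, like this dump
        "--no-pager",
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout[:2000])  # first couple of KB, just to show it works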
May 17 00:51:46.067000 audit[1197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdc7136c0 a2=4000 a3=1 items=0 ppid=1 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:46.067000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:51:46.137447 kernel: audit: type=1327 audit(1747443106.067:91): proctitle="/usr/lib/systemd/systemd-journald" May 17 00:51:46.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.170207 kernel: audit: type=1130 audit(1747443106.125:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.171777 systemd[1]: Finished kmod-static-nodes.service. May 17 00:51:46.193728 kernel: audit: type=1130 audit(1747443106.170:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.194531 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:51:46.194788 systemd[1]: Finished modprobe@configfs.service. May 17 00:51:46.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.216523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:46.216748 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:46.250103 kernel: audit: type=1130 audit(1747443106.193:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.250175 kernel: audit: type=1130 audit(1747443106.215:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.250195 kernel: audit: type=1131 audit(1747443106.215:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:46.262226 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:51:46.262434 systemd[1]: Finished modprobe@drm.service. May 17 00:51:46.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.267196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:46.267375 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:46.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.272696 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:51:46.272916 systemd[1]: Finished modprobe@fuse.service. May 17 00:51:46.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.277430 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:51:46.277630 systemd[1]: Finished modprobe@loop.service. May 17 00:51:46.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.282366 systemd[1]: Finished systemd-modules-load.service. May 17 00:51:46.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.287521 systemd[1]: Finished systemd-network-generator.service. 
May 17 00:51:46.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.292902 systemd[1]: Finished systemd-remount-fs.service. May 17 00:51:46.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.297972 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:51:46.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.303072 systemd[1]: Reached target network-pre.target. May 17 00:51:46.309102 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:51:46.314760 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:51:46.318787 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:51:46.360035 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:51:46.365435 systemd[1]: Starting systemd-journal-flush.service... May 17 00:51:46.369805 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:51:46.370942 systemd[1]: Starting systemd-random-seed.service... May 17 00:51:46.375512 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:51:46.376696 systemd[1]: Starting systemd-sysctl.service... May 17 00:51:46.382516 systemd[1]: Starting systemd-sysusers.service... May 17 00:51:46.387649 systemd[1]: Starting systemd-udev-settle.service... May 17 00:51:46.394022 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:51:46.399778 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:51:46.406975 udevadm[1246]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:51:46.425932 systemd-journald[1197]: Time spent on flushing to /var/log/journal/c23bc28aeddd484eaefdd1b9b6936bec is 12.410ms for 1014 entries. May 17 00:51:46.425932 systemd-journald[1197]: System Journal (/var/log/journal/c23bc28aeddd484eaefdd1b9b6936bec) is 8.0M, max 2.6G, 2.6G free. May 17 00:51:46.520122 systemd-journald[1197]: Received client request to flush runtime journal. May 17 00:51:46.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.459996 systemd[1]: Finished systemd-random-seed.service. May 17 00:51:46.465046 systemd[1]: Reached target first-boot-complete.target. 
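Entries in this dump carry microsecond wall-clock timestamps (the flush above reports 12.410ms spent on 1014 entries, for example). A small helper for measuring the gap between any two entries follows; it assumes all timestamps fall in the same year, since the journal's short format omits it.

# Sketch: elapsed time between two timestamps in this dump's
# "Mon DD HH:MM:SS.ffffff" format. The year is not in the log, so strptime's
# default year is used (fine as long as the boot does not cross Dec 31).
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"

def delta_ms(start: str, end: str) -> float:
    t0 = datetime.strptime(start, FMT)
    t1 = datetime.strptime(end, FMT)
    return (t1 - t0).total_seconds() * 1000.0

# Gap between the journald "Runtime Journal" line and the flush request above.
print(delta_ms("May 17 00:51:46.090804", "May 17 00:51:46.520122"))  # ~429 ms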
May 17 00:51:46.470488 systemd[1]: Finished systemd-sysctl.service. May 17 00:51:46.521122 systemd[1]: Finished systemd-journal-flush.service. May 17 00:51:46.932755 systemd[1]: Finished systemd-sysusers.service. May 17 00:51:46.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:46.939770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:51:47.299116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:51:47.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:47.707226 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:51:47.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:47.713489 systemd[1]: Starting systemd-udevd.service... May 17 00:51:47.733067 systemd-udevd[1257]: Using default interface naming scheme 'v252'. May 17 00:51:47.901654 systemd[1]: Started systemd-udevd.service. May 17 00:51:47.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:47.912695 systemd[1]: Starting systemd-networkd.service... May 17 00:51:47.943628 systemd[1]: Found device dev-ttyAMA0.device. May 17 00:51:47.982372 systemd[1]: Starting systemd-userdbd.service... 
May 17 00:51:47.993000 audit[1275]: AVC avc: denied { confidentiality } for pid=1275 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:51:48.008362 kernel: hv_vmbus: registering driver hv_balloon May 17 00:51:48.022516 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 17 00:51:48.022592 kernel: hv_balloon: Memory hot add disabled on ARM64 May 17 00:51:47.993000 audit[1275]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab1c7ce8b0 a1=aa2c a2=ffffba6f24b0 a3=aaab1c726010 items=12 ppid=1257 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:47.993000 audit: CWD cwd="/" May 17 00:51:47.993000 audit: PATH item=0 name=(null) inode=6399 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=1 name=(null) inode=11271 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=2 name=(null) inode=11271 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=3 name=(null) inode=11272 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=4 name=(null) inode=11271 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=5 name=(null) inode=11273 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=6 name=(null) inode=11271 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=7 name=(null) inode=11274 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=8 name=(null) inode=11271 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=9 name=(null) inode=11275 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=10 name=(null) inode=11271 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PATH item=11 name=(null) inode=11276 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:47.993000 audit: PROCTITLE proctitle="(udev-worker)" May 17 
00:51:48.037372 kernel: hv_vmbus: registering driver hyperv_fb May 17 00:51:48.050570 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 17 00:51:48.050636 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 17 00:51:48.061943 kernel: Console: switching to colour dummy device 80x25 May 17 00:51:48.064390 kernel: hv_utils: Registering HyperV Utility Driver May 17 00:51:48.064451 kernel: hv_vmbus: registering driver hv_utils May 17 00:51:48.064498 kernel: hv_utils: Heartbeat IC version 3.0 May 17 00:51:48.070363 kernel: hv_utils: Shutdown IC version 3.2 May 17 00:51:48.071353 kernel: hv_utils: TimeSync IC version 4.0 May 17 00:51:48.387830 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:51:48.406665 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:51:48.417953 systemd[1]: Started systemd-userdbd.service. May 17 00:51:48.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:48.678798 systemd-networkd[1278]: lo: Link UP May 17 00:51:48.678807 systemd-networkd[1278]: lo: Gained carrier May 17 00:51:48.679175 systemd-networkd[1278]: Enumeration completed May 17 00:51:48.679373 systemd[1]: Started systemd-networkd.service. May 17 00:51:48.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:48.692462 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:51:48.710187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:51:48.715100 systemd-networkd[1278]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:51:48.717183 systemd[1]: Finished systemd-udev-settle.service. May 17 00:51:48.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:48.723412 systemd[1]: Starting lvm2-activation-early.service... May 17 00:51:48.770584 kernel: mlx5_core 81c9:00:02.0 enP33225s1: Link up May 17 00:51:48.797592 kernel: hv_netvsc 002248b4-cabe-0022-48b4-cabe002248b4 eth0: Data path switched to VF: enP33225s1 May 17 00:51:48.798477 systemd-networkd[1278]: enP33225s1: Link UP May 17 00:51:48.798724 systemd-networkd[1278]: eth0: Link UP May 17 00:51:48.798788 systemd-networkd[1278]: eth0: Gained carrier May 17 00:51:48.802894 systemd-networkd[1278]: enP33225s1: Gained carrier May 17 00:51:48.808707 systemd-networkd[1278]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:51:49.063719 lvm[1336]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:51:49.118558 systemd[1]: Finished lvm2-activation-early.service. May 17 00:51:49.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.123712 systemd[1]: Reached target cryptsetup.target. May 17 00:51:49.129510 systemd[1]: Starting lvm2-activation.service... 
May 17 00:51:49.133640 lvm[1338]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:51:49.162603 systemd[1]: Finished lvm2-activation.service. May 17 00:51:49.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.167578 systemd[1]: Reached target local-fs-pre.target. May 17 00:51:49.172319 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:51:49.172348 systemd[1]: Reached target local-fs.target. May 17 00:51:49.176654 systemd[1]: Reached target machines.target. May 17 00:51:49.182217 systemd[1]: Starting ldconfig.service... May 17 00:51:49.186112 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:49.186181 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:49.187368 systemd[1]: Starting systemd-boot-update.service... May 17 00:51:49.192730 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:51:49.199645 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:51:49.205478 systemd[1]: Starting systemd-sysext.service... May 17 00:51:49.269026 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:51:49.269694 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:51:49.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.318189 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1341 (bootctl) May 17 00:51:49.319434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:51:49.354482 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:51:49.359592 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:51:49.359822 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:51:49.394963 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:51:49.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.411589 kernel: loop0: detected capacity change from 0 to 203944 May 17 00:51:49.463593 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:51:49.484598 kernel: loop1: detected capacity change from 0 to 203944 May 17 00:51:49.488969 (sd-sysext)[1358]: Using extensions 'kubernetes'. May 17 00:51:49.490521 (sd-sysext)[1358]: Merged extensions into '/usr'. May 17 00:51:49.506296 systemd[1]: Mounting usr-share-oem.mount... May 17 00:51:49.510698 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:49.511930 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:49.517387 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:49.524849 systemd[1]: Starting modprobe@loop.service... 
May 17 00:51:49.532456 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:49.532723 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:49.535422 systemd[1]: Mounted usr-share-oem.mount. May 17 00:51:49.540041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:49.540308 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:49.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.545412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:49.545665 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:49.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.551119 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:51:49.551383 systemd[1]: Finished modprobe@loop.service. May 17 00:51:49.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.556491 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:51:49.556681 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:51:49.557937 systemd[1]: Finished systemd-sysext.service. May 17 00:51:49.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.564123 systemd[1]: Starting ensure-sysext.service... May 17 00:51:49.569900 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:51:49.583691 systemd[1]: Reloading. May 17 00:51:49.586980 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:51:49.598193 systemd-fsck[1354]: fsck.fat 4.2 (2021-01-31) May 17 00:51:49.598193 systemd-fsck[1354]: /dev/sda1: 236 files, 117182/258078 clusters May 17 00:51:49.620661 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
May 17 00:51:49.637692 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:51:49.639325 /usr/lib/systemd/system-generators/torcx-generator[1392]: time="2025-05-17T00:51:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:51:49.639357 /usr/lib/systemd/system-generators/torcx-generator[1392]: time="2025-05-17T00:51:49Z" level=info msg="torcx already run" May 17 00:51:49.728255 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:51:49.728275 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:51:49.745278 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:51:49.806355 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:51:49.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.817471 systemd[1]: Mounting boot.mount... May 17 00:51:49.824393 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:49.825699 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:49.831068 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:49.840119 systemd[1]: Starting modprobe@loop.service... May 17 00:51:49.844053 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:49.844190 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:49.845042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:49.845213 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:49.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.850033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:49.850181 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:49.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:49.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.855284 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:51:49.855446 systemd[1]: Finished modprobe@loop.service. May 17 00:51:49.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.862209 systemd[1]: Mounted boot.mount. May 17 00:51:49.868100 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:51:49.868190 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:51:49.870074 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:49.871288 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:49.876423 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:49.881988 systemd[1]: Starting modprobe@loop.service... May 17 00:51:49.886079 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:49.886221 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:49.887199 systemd[1]: Finished systemd-boot-update.service. May 17 00:51:49.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.892433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:49.892619 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:49.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.897883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:49.898037 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:49.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.903281 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:51:49.903490 systemd[1]: Finished modprobe@loop.service. May 17 00:51:49.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.910324 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:49.911825 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:49.917669 systemd[1]: Starting modprobe@drm.service... May 17 00:51:49.922517 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:49.927905 systemd[1]: Starting modprobe@loop.service... May 17 00:51:49.931909 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:49.932029 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:49.933500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:49.934646 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:49.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.939770 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:51:49.939949 systemd[1]: Finished modprobe@drm.service. May 17 00:51:49.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.944758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:49.944936 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:49.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.950392 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:51:49.950581 systemd[1]: Finished modprobe@loop.service. May 17 00:51:49.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:51:49.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:49.955732 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:51:49.955821 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:51:49.956983 systemd[1]: Finished ensure-sysext.service. May 17 00:51:49.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:50.245548 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:51:50.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:50.252176 systemd[1]: Starting audit-rules.service... May 17 00:51:50.257163 systemd[1]: Starting clean-ca-certificates.service... May 17 00:51:50.262790 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:51:50.269393 systemd[1]: Starting systemd-resolved.service... May 17 00:51:50.275172 systemd[1]: Starting systemd-timesyncd.service... May 17 00:51:50.280681 systemd[1]: Starting systemd-update-utmp.service... May 17 00:51:50.285474 systemd[1]: Finished clean-ca-certificates.service. May 17 00:51:50.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:50.290639 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:51:50.318000 audit[1497]: SYSTEM_BOOT pid=1497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:51:50.322992 systemd[1]: Finished systemd-update-utmp.service. May 17 00:51:50.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:50.428525 systemd-resolved[1495]: Positive Trust Anchors: May 17 00:51:50.428539 systemd-resolved[1495]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:51:50.428583 systemd-resolved[1495]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:51:50.449030 systemd[1]: Finished systemd-journal-catalog-update.service. 
May 17 00:51:50.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:50.528329 systemd[1]: Started systemd-timesyncd.service. May 17 00:51:50.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:50.533403 systemd[1]: Reached target time-set.target. May 17 00:51:50.596253 systemd-resolved[1495]: Using system hostname 'ci-3510.3.7-n-6dc47d205e'. May 17 00:51:50.597763 systemd[1]: Started systemd-resolved.service. May 17 00:51:50.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:50.602223 systemd[1]: Reached target network.target. May 17 00:51:50.606613 systemd[1]: Reached target nss-lookup.target. May 17 00:51:50.672352 augenrules[1515]: No rules May 17 00:51:50.670000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:51:50.670000 audit[1515]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc5bc230 a2=420 a3=0 items=0 ppid=1491 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:50.670000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:51:50.673553 systemd[1]: Finished audit-rules.service. May 17 00:51:50.759078 systemd-timesyncd[1496]: Contacted time server 205.233.73.201:123 (0.flatcar.pool.ntp.org). May 17 00:51:50.759170 systemd-timesyncd[1496]: Initial clock synchronization to Sat 2025-05-17 00:51:50.762336 UTC. May 17 00:51:50.821684 systemd-networkd[1278]: eth0: Gained IPv6LL May 17 00:51:50.823801 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:51:50.829722 systemd[1]: Reached target network-online.target. May 17 00:51:56.921271 ldconfig[1340]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:51:56.934730 systemd[1]: Finished ldconfig.service. May 17 00:51:56.941168 systemd[1]: Starting systemd-update-done.service... May 17 00:51:56.976120 systemd[1]: Finished systemd-update-done.service. May 17 00:51:56.981236 systemd[1]: Reached target sysinit.target. May 17 00:51:56.985925 systemd[1]: Started motdgen.path. May 17 00:51:56.989824 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:51:56.996502 systemd[1]: Started logrotate.timer. May 17 00:51:57.000542 systemd[1]: Started mdadm.timer. May 17 00:51:57.004258 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:51:57.008999 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:51:57.009036 systemd[1]: Reached target paths.target. May 17 00:51:57.013247 systemd[1]: Reached target timers.target. May 17 00:51:57.018153 systemd[1]: Listening on dbus.socket. May 17 00:51:57.023427 systemd[1]: Starting docker.socket... 
May 17 00:51:57.053852 systemd[1]: Listening on sshd.socket. May 17 00:51:57.058068 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:57.058482 systemd[1]: Listening on docker.socket. May 17 00:51:57.062757 systemd[1]: Reached target sockets.target. May 17 00:51:57.068089 systemd[1]: Reached target basic.target. May 17 00:51:57.072598 systemd[1]: System is tainted: cgroupsv1 May 17 00:51:57.072652 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:51:57.072673 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:51:57.073892 systemd[1]: Starting containerd.service... May 17 00:51:57.078931 systemd[1]: Starting dbus.service... May 17 00:51:57.083407 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:51:57.089011 systemd[1]: Starting extend-filesystems.service... May 17 00:51:57.093364 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:51:57.094698 systemd[1]: Starting kubelet.service... May 17 00:51:57.099390 systemd[1]: Starting motdgen.service... May 17 00:51:57.104055 systemd[1]: Started nvidia.service. May 17 00:51:57.109204 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:51:57.115098 systemd[1]: Starting sshd-keygen.service... May 17 00:51:57.120803 systemd[1]: Starting systemd-logind.service... May 17 00:51:57.125162 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:57.125234 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:51:57.126492 systemd[1]: Starting update-engine.service... May 17 00:51:57.132146 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:51:57.140368 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:51:57.140639 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:51:57.214415 extend-filesystems[1531]: Found loop1 May 17 00:51:57.214415 extend-filesystems[1531]: Found sda May 17 00:51:57.214415 extend-filesystems[1531]: Found sda1 May 17 00:51:57.230640 extend-filesystems[1531]: Found sda2 May 17 00:51:57.230640 extend-filesystems[1531]: Found sda3 May 17 00:51:57.230640 extend-filesystems[1531]: Found usr May 17 00:51:57.230640 extend-filesystems[1531]: Found sda4 May 17 00:51:57.230640 extend-filesystems[1531]: Found sda6 May 17 00:51:57.230640 extend-filesystems[1531]: Found sda7 May 17 00:51:57.230640 extend-filesystems[1531]: Found sda9 May 17 00:51:57.230640 extend-filesystems[1531]: Checking size of /dev/sda9 May 17 00:51:57.225894 systemd-logind[1544]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:51:57.318617 jq[1549]: true May 17 00:51:57.318740 jq[1530]: false May 17 00:51:57.226068 systemd-logind[1544]: New seat seat0. May 17 00:51:57.248738 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:51:57.319041 jq[1577]: true May 17 00:51:57.248986 systemd[1]: Finished motdgen.service. May 17 00:51:57.262087 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 17 00:51:57.262328 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:51:57.336340 extend-filesystems[1531]: Old size kept for /dev/sda9 May 17 00:51:57.336340 extend-filesystems[1531]: Found sr0 May 17 00:51:57.347792 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:51:57.375893 env[1559]: time="2025-05-17T00:51:57.337767309Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:51:57.348032 systemd[1]: Finished extend-filesystems.service. May 17 00:51:57.403420 env[1559]: time="2025-05-17T00:51:57.403365500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:51:57.404422 env[1559]: time="2025-05-17T00:51:57.404381767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:51:57.409808 env[1559]: time="2025-05-17T00:51:57.409767861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:51:57.409808 env[1559]: time="2025-05-17T00:51:57.409801706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:51:57.410707 env[1559]: time="2025-05-17T00:51:57.410679552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:51:57.410756 env[1559]: time="2025-05-17T00:51:57.410707876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:51:57.410756 env[1559]: time="2025-05-17T00:51:57.410724038Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:51:57.410756 env[1559]: time="2025-05-17T00:51:57.410733640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:51:57.410837 env[1559]: time="2025-05-17T00:51:57.410816732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:51:57.411041 env[1559]: time="2025-05-17T00:51:57.411018921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:51:57.411200 env[1559]: time="2025-05-17T00:51:57.411170983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:51:57.411200 env[1559]: time="2025-05-17T00:51:57.411191466Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 00:51:57.411267 env[1559]: time="2025-05-17T00:51:57.411240473Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:51:57.411267 env[1559]: time="2025-05-17T00:51:57.411252674Z" level=info msg="metadata content store policy set" policy=shared May 17 00:51:57.428067 env[1559]: time="2025-05-17T00:51:57.428023566Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:51:57.428067 env[1559]: time="2025-05-17T00:51:57.428067412Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:51:57.428067 env[1559]: time="2025-05-17T00:51:57.428081214Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:51:57.428233 env[1559]: time="2025-05-17T00:51:57.428113778Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:51:57.428233 env[1559]: time="2025-05-17T00:51:57.428130501Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:51:57.428233 env[1559]: time="2025-05-17T00:51:57.428145263Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:51:57.428233 env[1559]: time="2025-05-17T00:51:57.428159385Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:51:57.428524 env[1559]: time="2025-05-17T00:51:57.428502514Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:51:57.428590 env[1559]: time="2025-05-17T00:51:57.428525318Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:51:57.428590 env[1559]: time="2025-05-17T00:51:57.428538880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:51:57.428590 env[1559]: time="2025-05-17T00:51:57.428552082Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:51:57.428678 env[1559]: time="2025-05-17T00:51:57.428595088Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:51:57.428742 env[1559]: time="2025-05-17T00:51:57.428719666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:51:57.428821 env[1559]: time="2025-05-17T00:51:57.428801077Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429094720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429126604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429140366Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429180372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429192934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429205655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429219137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429230699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429242181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429253222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429264024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429278306Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429385241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429399923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:51:57.429603 env[1559]: time="2025-05-17T00:51:57.429411925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:51:57.430214 env[1559]: time="2025-05-17T00:51:57.429423927Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:51:57.430214 env[1559]: time="2025-05-17T00:51:57.429437729Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:51:57.430214 env[1559]: time="2025-05-17T00:51:57.429448890Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:51:57.430214 env[1559]: time="2025-05-17T00:51:57.429466133Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:51:57.430214 env[1559]: time="2025-05-17T00:51:57.429500578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:51:57.430608 env[1559]: time="2025-05-17T00:51:57.429885153Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:51:57.430608 env[1559]: time="2025-05-17T00:51:57.429946522Z" level=info msg="Connect containerd service" May 17 00:51:57.430608 env[1559]: time="2025-05-17T00:51:57.429982087Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:51:57.430608 env[1559]: time="2025-05-17T00:51:57.430592575Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.430793684Z" level=info msg="Start subscribing containerd event" May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.430865694Z" level=info msg="Start recovering state" May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.430947346Z" level=info msg="Start event monitor" May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.430968229Z" level=info msg="Start snapshots syncer" May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.430978150Z" level=info msg="Start cni network conf syncer for default" May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.430986031Z" level=info msg="Start streaming server" May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.430813087Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.431100168Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:51:57.461441 env[1559]: time="2025-05-17T00:51:57.449527897Z" level=info msg="containerd successfully booted in 0.113350s" May 17 00:51:57.431236 systemd[1]: Started containerd.service. May 17 00:51:57.462226 bash[1599]: Updated "/home/core/.ssh/authorized_keys" May 17 00:51:57.462601 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:51:57.469352 systemd[1]: nvidia.service: Deactivated successfully. May 17 00:51:57.521717 dbus-daemon[1529]: [system] SELinux support is enabled May 17 00:51:57.521883 systemd[1]: Started dbus.service. May 17 00:51:57.527345 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:51:57.527371 systemd[1]: Reached target system-config.target. May 17 00:51:57.535686 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:51:57.535713 systemd[1]: Reached target user-config.target. May 17 00:51:57.542533 systemd[1]: Started systemd-logind.service. May 17 00:51:57.977754 systemd[1]: Started kubelet.service. May 17 00:51:57.995039 update_engine[1546]: I0517 00:51:57.979918 1546 main.cc:92] Flatcar Update Engine starting May 17 00:51:58.040978 systemd[1]: Started update-engine.service. May 17 00:51:58.041295 update_engine[1546]: I0517 00:51:58.041046 1546 update_check_scheduler.cc:74] Next update check in 7m5s May 17 00:51:58.049797 systemd[1]: Started locksmithd.service. May 17 00:51:58.367513 kubelet[1643]: E0517 00:51:58.367421 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:51:58.369219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:51:58.369362 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:51:59.291334 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:51:59.506350 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:51:59.523033 systemd[1]: Finished sshd-keygen.service. May 17 00:51:59.529158 systemd[1]: Starting issuegen.service... May 17 00:51:59.533906 systemd[1]: Started waagent.service. May 17 00:51:59.538484 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:51:59.538781 systemd[1]: Finished issuegen.service. May 17 00:51:59.544698 systemd[1]: Starting systemd-user-sessions.service... May 17 00:51:59.584012 systemd[1]: Finished systemd-user-sessions.service. May 17 00:51:59.590961 systemd[1]: Started getty@tty1.service. May 17 00:51:59.596410 systemd[1]: Started serial-getty@ttyAMA0.service. May 17 00:51:59.601400 systemd[1]: Reached target getty.target. May 17 00:51:59.605755 systemd[1]: Reached target multi-user.target. May 17 00:51:59.611728 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:51:59.622730 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
May 17 00:51:59.623049 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:51:59.628884 systemd[1]: Startup finished in 15.370s (kernel) + 23.036s (userspace) = 38.406s. May 17 00:52:00.262272 login[1673]: pam_lastlog(login:session): file /var/log/lastlog is locked/write May 17 00:52:00.262610 login[1672]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:52:00.343642 systemd[1]: Created slice user-500.slice. May 17 00:52:00.344660 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:52:00.348355 systemd-logind[1544]: New session 2 of user core. May 17 00:52:00.387747 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:52:00.389361 systemd[1]: Starting user@500.service... May 17 00:52:00.427025 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:00.615252 systemd[1679]: Queued start job for default target default.target. May 17 00:52:00.615475 systemd[1679]: Reached target paths.target. May 17 00:52:00.615490 systemd[1679]: Reached target sockets.target. May 17 00:52:00.615501 systemd[1679]: Reached target timers.target. May 17 00:52:00.615510 systemd[1679]: Reached target basic.target. May 17 00:52:00.615647 systemd[1]: Started user@500.service. May 17 00:52:00.616502 systemd[1]: Started session-2.scope. May 17 00:52:00.616946 systemd[1679]: Reached target default.target. May 17 00:52:00.616994 systemd[1679]: Startup finished in 183ms. May 17 00:52:01.264310 login[1673]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:52:01.268772 systemd[1]: Started session-1.scope. May 17 00:52:01.268992 systemd-logind[1544]: New session 1 of user core. May 17 00:52:06.291200 waagent[1666]: 2025-05-17T00:52:06.291072Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 May 17 00:52:06.298485 waagent[1666]: 2025-05-17T00:52:06.298389Z INFO Daemon Daemon OS: flatcar 3510.3.7 May 17 00:52:06.303385 waagent[1666]: 2025-05-17T00:52:06.303300Z INFO Daemon Daemon Python: 3.9.16 May 17 00:52:06.308269 waagent[1666]: 2025-05-17T00:52:06.308142Z INFO Daemon Daemon Run daemon May 17 00:52:06.312978 waagent[1666]: 2025-05-17T00:52:06.312898Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' May 17 00:52:06.331371 waagent[1666]: 2025-05-17T00:52:06.331228Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
May 17 00:52:06.347057 waagent[1666]: 2025-05-17T00:52:06.346914Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:52:06.357232 waagent[1666]: 2025-05-17T00:52:06.357140Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:52:06.362660 waagent[1666]: 2025-05-17T00:52:06.362548Z INFO Daemon Daemon Using waagent for provisioning May 17 00:52:06.368666 waagent[1666]: 2025-05-17T00:52:06.368550Z INFO Daemon Daemon Activate resource disk May 17 00:52:06.373605 waagent[1666]: 2025-05-17T00:52:06.373500Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 17 00:52:06.388558 waagent[1666]: 2025-05-17T00:52:06.388464Z INFO Daemon Daemon Found device: None May 17 00:52:06.393430 waagent[1666]: 2025-05-17T00:52:06.393337Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 17 00:52:06.402238 waagent[1666]: 2025-05-17T00:52:06.402144Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 17 00:52:06.414597 waagent[1666]: 2025-05-17T00:52:06.414493Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:52:06.420731 waagent[1666]: 2025-05-17T00:52:06.420648Z INFO Daemon Daemon Running default provisioning handler May 17 00:52:06.434929 waagent[1666]: 2025-05-17T00:52:06.434787Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 17 00:52:06.450858 waagent[1666]: 2025-05-17T00:52:06.450713Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:52:06.461516 waagent[1666]: 2025-05-17T00:52:06.461420Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:52:06.466811 waagent[1666]: 2025-05-17T00:52:06.466720Z INFO Daemon Daemon Copying ovf-env.xml May 17 00:52:06.567420 waagent[1666]: 2025-05-17T00:52:06.567196Z INFO Daemon Daemon Successfully mounted dvd May 17 00:52:06.648920 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 17 00:52:06.707116 waagent[1666]: 2025-05-17T00:52:06.706950Z INFO Daemon Daemon Detect protocol endpoint May 17 00:52:06.712347 waagent[1666]: 2025-05-17T00:52:06.712235Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:52:06.718507 waagent[1666]: 2025-05-17T00:52:06.718392Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 17 00:52:06.725471 waagent[1666]: 2025-05-17T00:52:06.725365Z INFO Daemon Daemon Test for route to 168.63.129.16 May 17 00:52:06.731164 waagent[1666]: 2025-05-17T00:52:06.731077Z INFO Daemon Daemon Route to 168.63.129.16 exists May 17 00:52:06.736804 waagent[1666]: 2025-05-17T00:52:06.736722Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 17 00:52:06.900217 waagent[1666]: 2025-05-17T00:52:06.900070Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 17 00:52:06.907312 waagent[1666]: 2025-05-17T00:52:06.907257Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 17 00:52:06.912897 waagent[1666]: 2025-05-17T00:52:06.912814Z INFO Daemon Daemon Server preferred version:2015-04-05 May 17 00:52:07.738000 waagent[1666]: 2025-05-17T00:52:07.737824Z INFO Daemon Daemon Initializing goal state during protocol detection May 17 00:52:07.753147 waagent[1666]: 2025-05-17T00:52:07.753061Z INFO Daemon Daemon Forcing an update of the goal state.. May 17 00:52:07.759257 waagent[1666]: 2025-05-17T00:52:07.759170Z INFO Daemon Daemon Fetching goal state [incarnation 1] May 17 00:52:07.955840 waagent[1666]: 2025-05-17T00:52:07.955689Z INFO Daemon Daemon Found private key matching thumbprint 546A864CBFC542309AF173B36445AE94B63B4C76 May 17 00:52:07.964727 waagent[1666]: 2025-05-17T00:52:07.964635Z INFO Daemon Daemon Certificate with thumbprint 21EA95869B2CEC8639F9881895296742AB45D715 has no matching private key. May 17 00:52:07.974792 waagent[1666]: 2025-05-17T00:52:07.974707Z INFO Daemon Daemon Fetch goal state completed May 17 00:52:08.037128 waagent[1666]: 2025-05-17T00:52:08.037028Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 86912ca9-b39d-495c-b2d7-872ac085f84e New eTag: 3861911760049486133] May 17 00:52:08.048684 waagent[1666]: 2025-05-17T00:52:08.048596Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:52:08.101444 waagent[1666]: 2025-05-17T00:52:08.101343Z INFO Daemon Daemon Starting provisioning May 17 00:52:08.106928 waagent[1666]: 2025-05-17T00:52:08.106838Z INFO Daemon Daemon Handle ovf-env.xml. May 17 00:52:08.112078 waagent[1666]: 2025-05-17T00:52:08.112002Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-6dc47d205e] May 17 00:52:08.185153 waagent[1666]: 2025-05-17T00:52:08.185005Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-6dc47d205e] May 17 00:52:08.194297 waagent[1666]: 2025-05-17T00:52:08.194198Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 17 00:52:08.201609 waagent[1666]: 2025-05-17T00:52:08.201498Z INFO Daemon Daemon Primary interface is [eth0] May 17 00:52:08.219162 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. May 17 00:52:08.219393 systemd[1]: Stopped systemd-networkd-wait-online.service. May 17 00:52:08.219446 systemd[1]: Stopping systemd-networkd-wait-online.service... May 17 00:52:08.219655 systemd[1]: Stopping systemd-networkd.service... May 17 00:52:08.223624 systemd-networkd[1278]: eth0: DHCPv6 lease lost May 17 00:52:08.225308 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:52:08.225561 systemd[1]: Stopped systemd-networkd.service. May 17 00:52:08.227771 systemd[1]: Starting systemd-networkd.service... 
May 17 00:52:08.263034 systemd-networkd[1725]: enP33225s1: Link UP May 17 00:52:08.263329 systemd-networkd[1725]: enP33225s1: Gained carrier May 17 00:52:08.264296 systemd-networkd[1725]: eth0: Link UP May 17 00:52:08.264375 systemd-networkd[1725]: eth0: Gained carrier May 17 00:52:08.264801 systemd-networkd[1725]: lo: Link UP May 17 00:52:08.264882 systemd-networkd[1725]: lo: Gained carrier May 17 00:52:08.265186 systemd-networkd[1725]: eth0: Gained IPv6LL May 17 00:52:08.266371 systemd-networkd[1725]: Enumeration completed May 17 00:52:08.266666 systemd[1]: Started systemd-networkd.service. May 17 00:52:08.268556 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:52:08.274026 waagent[1666]: 2025-05-17T00:52:08.271419Z INFO Daemon Daemon Create user account if not exists May 17 00:52:08.278790 systemd-networkd[1725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:52:08.278964 waagent[1666]: 2025-05-17T00:52:08.278862Z INFO Daemon Daemon User core already exists, skip useradd May 17 00:52:08.285148 waagent[1666]: 2025-05-17T00:52:08.285009Z INFO Daemon Daemon Configure sudoer May 17 00:52:08.290491 waagent[1666]: 2025-05-17T00:52:08.290369Z INFO Daemon Daemon Configure sshd May 17 00:52:08.295064 waagent[1666]: 2025-05-17T00:52:08.294964Z INFO Daemon Daemon Deploy ssh public key. May 17 00:52:08.306206 systemd-networkd[1725]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:52:08.310806 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:52:08.471395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:52:08.471584 systemd[1]: Stopped kubelet.service. May 17 00:52:08.473056 systemd[1]: Starting kubelet.service... May 17 00:52:08.569078 systemd[1]: Started kubelet.service. May 17 00:52:08.682476 kubelet[1743]: E0517 00:52:08.682417 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:52:08.684879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:52:08.685034 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:52:09.512605 waagent[1666]: 2025-05-17T00:52:09.508658Z INFO Daemon Daemon Provisioning complete May 17 00:52:09.526309 waagent[1666]: 2025-05-17T00:52:09.526242Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 17 00:52:09.532754 waagent[1666]: 2025-05-17T00:52:09.532671Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 17 00:52:09.543730 waagent[1666]: 2025-05-17T00:52:09.543650Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent May 17 00:52:09.870245 waagent[1750]: 2025-05-17T00:52:09.870094Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent May 17 00:52:09.871360 waagent[1750]: 2025-05-17T00:52:09.871303Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:52:09.871628 waagent[1750]: 2025-05-17T00:52:09.871556Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:52:09.884356 waagent[1750]: 2025-05-17T00:52:09.884274Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
May 17 00:52:09.884704 waagent[1750]: 2025-05-17T00:52:09.884655Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] May 17 00:52:09.955812 waagent[1750]: 2025-05-17T00:52:09.955670Z INFO ExtHandler ExtHandler Found private key matching thumbprint 546A864CBFC542309AF173B36445AE94B63B4C76 May 17 00:52:09.956160 waagent[1750]: 2025-05-17T00:52:09.956110Z INFO ExtHandler ExtHandler Certificate with thumbprint 21EA95869B2CEC8639F9881895296742AB45D715 has no matching private key. May 17 00:52:09.956466 waagent[1750]: 2025-05-17T00:52:09.956417Z INFO ExtHandler ExtHandler Fetch goal state completed May 17 00:52:09.971037 waagent[1750]: 2025-05-17T00:52:09.970981Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 07ec1dbf-a791-43b8-9b0d-7fe207c7284d New eTag: 3861911760049486133] May 17 00:52:09.971777 waagent[1750]: 2025-05-17T00:52:09.971721Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:52:10.061975 waagent[1750]: 2025-05-17T00:52:10.061828Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:52:10.086138 waagent[1750]: 2025-05-17T00:52:10.086048Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1750 May 17 00:52:10.090215 waagent[1750]: 2025-05-17T00:52:10.090127Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:52:10.091801 waagent[1750]: 2025-05-17T00:52:10.091730Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 17 00:52:10.196091 waagent[1750]: 2025-05-17T00:52:10.195983Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:52:10.196860 waagent[1750]: 2025-05-17T00:52:10.196791Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:52:10.204949 waagent[1750]: 2025-05-17T00:52:10.204874Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:52:10.205670 waagent[1750]: 2025-05-17T00:52:10.205612Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:52:10.207005 waagent[1750]: 2025-05-17T00:52:10.206945Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] May 17 00:52:10.208525 waagent[1750]: 2025-05-17T00:52:10.208448Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:52:10.208936 waagent[1750]: 2025-05-17T00:52:10.208857Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:52:10.209405 waagent[1750]: 2025-05-17T00:52:10.209335Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:52:10.210006 waagent[1750]: 2025-05-17T00:52:10.209942Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
May 17 00:52:10.210326 waagent[1750]: 2025-05-17T00:52:10.210270Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:52:10.210326 waagent[1750]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:52:10.210326 waagent[1750]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:52:10.210326 waagent[1750]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:52:10.210326 waagent[1750]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:52:10.210326 waagent[1750]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:52:10.210326 waagent[1750]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:52:10.212549 waagent[1750]: 2025-05-17T00:52:10.212386Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:52:10.213410 waagent[1750]: 2025-05-17T00:52:10.213338Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:52:10.213629 waagent[1750]: 2025-05-17T00:52:10.213546Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:52:10.214234 waagent[1750]: 2025-05-17T00:52:10.214168Z INFO EnvHandler ExtHandler Configure routes May 17 00:52:10.214399 waagent[1750]: 2025-05-17T00:52:10.214350Z INFO EnvHandler ExtHandler Gateway:None May 17 00:52:10.214517 waagent[1750]: 2025-05-17T00:52:10.214475Z INFO EnvHandler ExtHandler Routes:None May 17 00:52:10.215483 waagent[1750]: 2025-05-17T00:52:10.215423Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:52:10.215657 waagent[1750]: 2025-05-17T00:52:10.215560Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:52:10.216407 waagent[1750]: 2025-05-17T00:52:10.216314Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:52:10.216604 waagent[1750]: 2025-05-17T00:52:10.216520Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 17 00:52:10.216917 waagent[1750]: 2025-05-17T00:52:10.216839Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:52:10.228667 waagent[1750]: 2025-05-17T00:52:10.228584Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) May 17 00:52:10.230270 waagent[1750]: 2025-05-17T00:52:10.230211Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:52:10.231371 waagent[1750]: 2025-05-17T00:52:10.231317Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' May 17 00:52:10.262542 waagent[1750]: 2025-05-17T00:52:10.262402Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1725' May 17 00:52:10.279047 waagent[1750]: 2025-05-17T00:52:10.278980Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
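Note on the routing table dumped above: it is read straight from /proc/net/route, where destination, gateway and mask are little-endian hex. Decoded, 0114C80A is 10.200.20.1 (the DHCP gateway reported earlier), 10813FA8 is 168.63.129.16 (the WireServer) and FEA9FEA9 is 169.254.169.254 (IMDS). A small decoder sketch:

    #!/usr/bin/env python3
    # Sketch: decode the little-endian hex fields of /proc/net/route into
    # dotted-quad addresses, matching the MonitorHandler dump above.
    import socket
    import struct

    def hex_to_ip(h: str) -> str:
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header line
        for line in f:
            iface, dest, gateway, *_mid, mask, _mtu, _win, _irtt = line.split()
            print(f"{iface}: {hex_to_ip(dest)}/{hex_to_ip(mask)} via {hex_to_ip(gateway)}")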
May 17 00:52:10.359032 waagent[1750]: 2025-05-17T00:52:10.358874Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:52:10.359032 waagent[1750]: Executing ['ip', '-a', '-o', 'link']: May 17 00:52:10.359032 waagent[1750]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:52:10.359032 waagent[1750]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:ca:be brd ff:ff:ff:ff:ff:ff May 17 00:52:10.359032 waagent[1750]: 3: enP33225s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:ca:be brd ff:ff:ff:ff:ff:ff\ altname enP33225p0s2 May 17 00:52:10.359032 waagent[1750]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:52:10.359032 waagent[1750]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:52:10.359032 waagent[1750]: 2: eth0 inet 10.200.20.39/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:52:10.359032 waagent[1750]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:52:10.359032 waagent[1750]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:52:10.359032 waagent[1750]: 2: eth0 inet6 fe80::222:48ff:feb4:cabe/64 scope link \ valid_lft forever preferred_lft forever May 17 00:52:10.592054 waagent[1750]: 2025-05-17T00:52:10.591991Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting May 17 00:52:11.547953 waagent[1666]: 2025-05-17T00:52:11.547832Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running May 17 00:52:11.552715 waagent[1666]: 2025-05-17T00:52:11.552653Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent May 17 00:52:12.847534 waagent[1784]: 2025-05-17T00:52:12.847426Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1) May 17 00:52:12.849648 waagent[1784]: 2025-05-17T00:52:12.849551Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 May 17 00:52:12.849926 waagent[1784]: 2025-05-17T00:52:12.849879Z INFO ExtHandler ExtHandler Python: 3.9.16 May 17 00:52:12.850135 waagent[1784]: 2025-05-17T00:52:12.850092Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 17 00:52:12.865065 waagent[1784]: 2025-05-17T00:52:12.864928Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:52:12.865764 waagent[1784]: 2025-05-17T00:52:12.865702Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:52:12.866036 waagent[1784]: 2025-05-17T00:52:12.865988Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:52:12.866353 waagent[1784]: 2025-05-17T00:52:12.866303Z INFO ExtHandler ExtHandler Initializing the goal state... 
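Note on the interface dump above: it is the output of the ip -o (one-line) commands, where the backslashes mark the line breaks ip replaced. A sketch of splitting those one-line records back into interface/address pairs:

    #!/usr/bin/env python3
    # Sketch: run "ip -4 -o address" and pull out (interface, CIDR) pairs,
    # matching the records shown in the MonitorHandler dump above.
    import re
    import subprocess

    out = subprocess.run(["ip", "-4", "-o", "address"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        m = re.match(r"^\d+:\s+(\S+)\s+inet\s+(\S+)", line)
        if m:
            print(f"{m.group(1)} -> {m.group(2)}")  # e.g. eth0 -> 10.200.20.39/24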
May 17 00:52:12.880691 waagent[1784]: 2025-05-17T00:52:12.880595Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 17 00:52:12.889738 waagent[1784]: 2025-05-17T00:52:12.889674Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 17 00:52:12.890998 waagent[1784]: 2025-05-17T00:52:12.890941Z INFO ExtHandler May 17 00:52:12.891241 waagent[1784]: 2025-05-17T00:52:12.891194Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 80e5e35f-beba-4759-bfd5-4747cf12bb0b eTag: 3861911760049486133 source: Fabric] May 17 00:52:12.892113 waagent[1784]: 2025-05-17T00:52:12.892057Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 17 00:52:12.893423 waagent[1784]: 2025-05-17T00:52:12.893366Z INFO ExtHandler May 17 00:52:12.893661 waagent[1784]: 2025-05-17T00:52:12.893613Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 17 00:52:12.900769 waagent[1784]: 2025-05-17T00:52:12.900711Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 17 00:52:12.901442 waagent[1784]: 2025-05-17T00:52:12.901395Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:52:12.921817 waagent[1784]: 2025-05-17T00:52:12.921753Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. May 17 00:52:12.999622 waagent[1784]: 2025-05-17T00:52:12.999470Z INFO ExtHandler Downloaded certificate {'thumbprint': '546A864CBFC542309AF173B36445AE94B63B4C76', 'hasPrivateKey': True} May 17 00:52:13.000988 waagent[1784]: 2025-05-17T00:52:13.000926Z INFO ExtHandler Downloaded certificate {'thumbprint': '21EA95869B2CEC8639F9881895296742AB45D715', 'hasPrivateKey': False} May 17 00:52:13.002247 waagent[1784]: 2025-05-17T00:52:13.002188Z INFO ExtHandler Fetch goal state from WireServer completed May 17 00:52:13.003276 waagent[1784]: 2025-05-17T00:52:13.003218Z INFO ExtHandler ExtHandler Goal state initialization completed. 
May 17 00:52:13.024679 waagent[1784]: 2025-05-17T00:52:13.024507Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) May 17 00:52:13.035490 waagent[1784]: 2025-05-17T00:52:13.035360Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:52:13.039893 waagent[1784]: 2025-05-17T00:52:13.039769Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] May 17 00:52:13.040292 waagent[1784]: 2025-05-17T00:52:13.040242Z INFO ExtHandler ExtHandler Checking state of the firewall May 17 00:52:13.209397 waagent[1784]: 2025-05-17T00:52:13.209185Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: May 17 00:52:13.209397 waagent[1784]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:52:13.209397 waagent[1784]: pkts bytes target prot opt in out source destination May 17 00:52:13.209397 waagent[1784]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:52:13.209397 waagent[1784]: pkts bytes target prot opt in out source destination May 17 00:52:13.209397 waagent[1784]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:52:13.209397 waagent[1784]: pkts bytes target prot opt in out source destination May 17 00:52:13.209397 waagent[1784]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:52:13.209397 waagent[1784]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:52:13.209397 waagent[1784]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:52:13.214245 waagent[1784]: 2025-05-17T00:52:13.214162Z INFO ExtHandler ExtHandler Setting up persistent firewall rules May 17 00:52:13.217774 waagent[1784]: 2025-05-17T00:52:13.217616Z INFO ExtHandler ExtHandler The firewalld service is not present on the system May 17 00:52:13.218098 waagent[1784]: 2025-05-17T00:52:13.218029Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:52:13.218482 waagent[1784]: 2025-05-17T00:52:13.218424Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:52:13.227080 waagent[1784]: 2025-05-17T00:52:13.227010Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:52:13.227747 waagent[1784]: 2025-05-17T00:52:13.227682Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:52:13.236657 waagent[1784]: 2025-05-17T00:52:13.236538Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1784 May 17 00:52:13.240100 waagent[1784]: 2025-05-17T00:52:13.240012Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:52:13.241020 waagent[1784]: 2025-05-17T00:52:13.240960Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled May 17 00:52:13.241958 waagent[1784]: 2025-05-17T00:52:13.241893Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 17 00:52:13.244873 waagent[1784]: 2025-05-17T00:52:13.244795Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 17 00:52:13.246330 waagent[1784]: 2025-05-17T00:52:13.246256Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:52:13.247004 waagent[1784]: 2025-05-17T00:52:13.246941Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:52:13.247260 waagent[1784]: 2025-05-17T00:52:13.247212Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:52:13.247960 waagent[1784]: 2025-05-17T00:52:13.247903Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:52:13.248367 waagent[1784]: 2025-05-17T00:52:13.248315Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:52:13.248367 waagent[1784]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:52:13.248367 waagent[1784]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:52:13.248367 waagent[1784]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:52:13.248367 waagent[1784]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:52:13.248367 waagent[1784]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:52:13.248367 waagent[1784]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:52:13.251254 waagent[1784]: 2025-05-17T00:52:13.251129Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:52:13.254066 waagent[1784]: 2025-05-17T00:52:13.252307Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:52:13.254716 waagent[1784]: 2025-05-17T00:52:13.254611Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:52:13.255325 waagent[1784]: 2025-05-17T00:52:13.255248Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:52:13.255997 waagent[1784]: 2025-05-17T00:52:13.255932Z INFO EnvHandler ExtHandler Configure routes May 17 00:52:13.256137 waagent[1784]: 2025-05-17T00:52:13.256065Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:52:13.257104 waagent[1784]: 2025-05-17T00:52:13.257006Z INFO EnvHandler ExtHandler Gateway:None May 17 00:52:13.257402 waagent[1784]: 2025-05-17T00:52:13.257348Z INFO EnvHandler ExtHandler Routes:None May 17 00:52:13.261089 waagent[1784]: 2025-05-17T00:52:13.261005Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:52:13.261206 waagent[1784]: 2025-05-17T00:52:13.261152Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
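Note on the firewall listing a few entries above: the OUTPUT-chain rules implement the agent's WireServer policy, allow DNS to 168.63.129.16, allow root-owned (UID 0) traffic to it, and drop other new or invalid connections. A sketch of equivalent rules, assuming the default filter table since the listing does not name one; run as root.

    #!/usr/bin/env python3
    # Sketch: recreate the three OUTPUT rules shown in the agent's listing.
    # The table choice (filter) is an assumption.
    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)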
May 17 00:52:13.264969 waagent[1784]: 2025-05-17T00:52:13.264881Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:52:13.264969 waagent[1784]: Executing ['ip', '-a', '-o', 'link']: May 17 00:52:13.264969 waagent[1784]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:52:13.264969 waagent[1784]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:ca:be brd ff:ff:ff:ff:ff:ff May 17 00:52:13.264969 waagent[1784]: 3: enP33225s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:ca:be brd ff:ff:ff:ff:ff:ff\ altname enP33225p0s2 May 17 00:52:13.264969 waagent[1784]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:52:13.264969 waagent[1784]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:52:13.264969 waagent[1784]: 2: eth0 inet 10.200.20.39/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:52:13.264969 waagent[1784]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:52:13.264969 waagent[1784]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:52:13.264969 waagent[1784]: 2: eth0 inet6 fe80::222:48ff:feb4:cabe/64 scope link \ valid_lft forever preferred_lft forever May 17 00:52:13.265304 waagent[1784]: 2025-05-17T00:52:13.265196Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:52:13.286879 waagent[1784]: 2025-05-17T00:52:13.286791Z INFO ExtHandler ExtHandler Downloading agent manifest May 17 00:52:13.302043 waagent[1784]: 2025-05-17T00:52:13.301962Z INFO ExtHandler ExtHandler May 17 00:52:13.303118 waagent[1784]: 2025-05-17T00:52:13.303056Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ab8a42c2-f99c-42b6-9abd-c184516970d0 correlation 467674a3-5544-448e-97db-feb9b229f557 created: 2025-05-17T00:50:37.687447Z] May 17 00:52:13.305427 waagent[1784]: 2025-05-17T00:52:13.305345Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 17 00:52:13.308968 waagent[1784]: 2025-05-17T00:52:13.308895Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms] May 17 00:52:13.328176 waagent[1784]: 2025-05-17T00:52:13.328028Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:52:13.332637 waagent[1784]: 2025-05-17T00:52:13.332536Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 17 00:52:13.340369 waagent[1784]: 2025-05-17T00:52:13.340244Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D61936E5-A642-4711-9D26-46369406524B;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 17 00:52:13.351515 waagent[1784]: 2025-05-17T00:52:13.351432Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:52:18.721378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:52:18.721553 systemd[1]: Stopped kubelet.service. May 17 00:52:18.722983 systemd[1]: Starting kubelet.service... May 17 00:52:18.814938 systemd[1]: Started kubelet.service. 
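Note on "Set block dev timeout: sda with timeout: 300" above: the disk I/O timeout is exposed through sysfs, and the sketch below assumes the conventional /sys/block/<dev>/device/timeout attribute is what the agent writes; needs root.

    #!/usr/bin/env python3
    # Sketch: set a disk's I/O timeout as the log entry above describes,
    # via the sysfs attribute (path layout assumed to be the standard one).
    from pathlib import Path

    def set_block_dev_timeout(dev: str = "sda", seconds: int = 300) -> None:
        timeout_file = Path(f"/sys/block/{dev}/device/timeout")
        timeout_file.write_text(f"{seconds}\n")
        print(f"{dev}: timeout now {timeout_file.read_text().strip()}s")

    if __name__ == "__main__":
        set_block_dev_timeout()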
May 17 00:52:18.922484 kubelet[1835]: E0517 00:52:18.922430 1835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:52:18.924226 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:52:18.924379 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:52:24.660022 systemd[1]: Created slice system-sshd.slice. May 17 00:52:24.661203 systemd[1]: Started sshd@0-10.200.20.39:22-10.200.16.10:59500.service. May 17 00:52:25.344119 sshd[1842]: Accepted publickey for core from 10.200.16.10 port 59500 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:25.362254 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:25.366658 systemd[1]: Started session-3.scope. May 17 00:52:25.367619 systemd-logind[1544]: New session 3 of user core. May 17 00:52:25.754623 systemd[1]: Started sshd@1-10.200.20.39:22-10.200.16.10:59508.service. May 17 00:52:26.240269 sshd[1847]: Accepted publickey for core from 10.200.16.10 port 59508 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:26.241627 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:26.245511 systemd-logind[1544]: New session 4 of user core. May 17 00:52:26.245985 systemd[1]: Started session-4.scope. May 17 00:52:26.589718 sshd[1847]: pam_unix(sshd:session): session closed for user core May 17 00:52:26.592969 systemd[1]: sshd@1-10.200.20.39:22-10.200.16.10:59508.service: Deactivated successfully. May 17 00:52:26.593718 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:52:26.594659 systemd-logind[1544]: Session 4 logged out. Waiting for processes to exit. May 17 00:52:26.595421 systemd-logind[1544]: Removed session 4. May 17 00:52:26.667199 systemd[1]: Started sshd@2-10.200.20.39:22-10.200.16.10:59520.service. May 17 00:52:27.143248 sshd[1854]: Accepted publickey for core from 10.200.16.10 port 59520 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:27.144884 sshd[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:27.149139 systemd[1]: Started session-5.scope. May 17 00:52:27.149769 systemd-logind[1544]: New session 5 of user core. May 17 00:52:27.500009 sshd[1854]: pam_unix(sshd:session): session closed for user core May 17 00:52:27.502957 systemd[1]: sshd@2-10.200.20.39:22-10.200.16.10:59520.service: Deactivated successfully. May 17 00:52:27.503692 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:52:27.504788 systemd-logind[1544]: Session 5 logged out. Waiting for processes to exit. May 17 00:52:27.505494 systemd-logind[1544]: Removed session 5. May 17 00:52:27.579015 systemd[1]: Started sshd@3-10.200.20.39:22-10.200.16.10:59528.service. May 17 00:52:28.061102 sshd[1861]: Accepted publickey for core from 10.200.16.10 port 59528 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:28.062661 sshd[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:28.066262 systemd-logind[1544]: New session 6 of user core. May 17 00:52:28.066694 systemd[1]: Started session-6.scope. 
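Note on the sshd entries above: the "SHA256:kTalk4vv..." string is OpenSSH's key fingerprint, the SHA-256 digest of the raw public-key blob, base64-encoded with the padding stripped. A sketch that reproduces it from an authorized_keys-style line:

    #!/usr/bin/env python3
    # Sketch: compute an OpenSSH "SHA256:..." fingerprint from a public key line
    # ("ssh-rsa AAAA... comment"), matching the format sshd logs above.
    import base64
    import hashlib

    def openssh_fingerprint(pubkey_line: str) -> str:
        blob = base64.b64decode(pubkey_line.split()[1])  # the base64 key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Usage (key material is a placeholder, not the key from this host):
    # print(openssh_fingerprint(open("/home/core/.ssh/authorized_keys").readline()))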
May 17 00:52:28.409396 sshd[1861]: pam_unix(sshd:session): session closed for user core May 17 00:52:28.412661 systemd[1]: sshd@3-10.200.20.39:22-10.200.16.10:59528.service: Deactivated successfully. May 17 00:52:28.414203 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:52:28.414855 systemd-logind[1544]: Session 6 logged out. Waiting for processes to exit. May 17 00:52:28.415789 systemd-logind[1544]: Removed session 6. May 17 00:52:28.482286 systemd[1]: Started sshd@4-10.200.20.39:22-10.200.16.10:56850.service. May 17 00:52:28.928492 sshd[1868]: Accepted publickey for core from 10.200.16.10 port 56850 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:28.930149 sshd[1868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:28.930972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:52:28.931175 systemd[1]: Stopped kubelet.service. May 17 00:52:28.932720 systemd[1]: Starting kubelet.service... May 17 00:52:28.936056 systemd[1]: Started session-7.scope. May 17 00:52:28.937469 systemd-logind[1544]: New session 7 of user core. May 17 00:52:29.032462 systemd[1]: Started kubelet.service. May 17 00:52:29.068282 kubelet[1880]: E0517 00:52:29.068220 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:52:29.069980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:52:29.070123 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:52:29.809226 sudo[1886]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:52:29.809791 sudo[1886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:52:29.821915 systemd[1]: Starting coreos-metadata.service... May 17 00:52:29.885459 coreos-metadata[1890]: May 17 00:52:29.885 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:52:29.890927 coreos-metadata[1890]: May 17 00:52:29.890 INFO Fetch successful May 17 00:52:29.891096 coreos-metadata[1890]: May 17 00:52:29.891 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 17 00:52:29.892852 coreos-metadata[1890]: May 17 00:52:29.892 INFO Fetch successful May 17 00:52:29.893159 coreos-metadata[1890]: May 17 00:52:29.893 INFO Fetching http://168.63.129.16/machine/0c4d3dd5-f955-4268-9fe3-e6433a41c9af/357e9be2%2Db716%2D4d43%2Da6b3%2Df26013b52c2c.%5Fci%2D3510.3.7%2Dn%2D6dc47d205e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 17 00:52:29.894993 coreos-metadata[1890]: May 17 00:52:29.894 INFO Fetch successful May 17 00:52:29.928211 coreos-metadata[1890]: May 17 00:52:29.928 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 17 00:52:29.938526 coreos-metadata[1890]: May 17 00:52:29.938 INFO Fetch successful May 17 00:52:29.949391 systemd[1]: Finished coreos-metadata.service. May 17 00:52:30.349373 systemd[1]: Stopped kubelet.service. May 17 00:52:30.351455 systemd[1]: Starting kubelet.service... May 17 00:52:30.375785 systemd[1]: Reloading. 
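Note on the coreos-metadata fetches above: they mix two Azure endpoints, the WireServer (168.63.129.16) for the goal state and shared config, and the Instance Metadata Service (169.254.169.254) for instance details such as vmSize. IMDS requires the "Metadata: true" request header. A sketch of the vmSize request shown above:

    #!/usr/bin/env python3
    # Sketch: fetch the VM size from Azure IMDS, mirroring the coreos-metadata
    # request in the log above. The "Metadata: true" header is required by IMDS.
    import urllib.request

    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # prints the VM size string for this instance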
May 17 00:52:30.452979 /usr/lib/systemd/system-generators/torcx-generator[1950]: time="2025-05-17T00:52:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:52:30.453010 /usr/lib/systemd/system-generators/torcx-generator[1950]: time="2025-05-17T00:52:30Z" level=info msg="torcx already run" May 17 00:52:30.533786 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:52:30.533949 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:52:30.551207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:52:30.647113 systemd[1]: Started kubelet.service. May 17 00:52:30.651014 systemd[1]: Stopping kubelet.service... May 17 00:52:30.651877 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:52:30.652118 systemd[1]: Stopped kubelet.service. May 17 00:52:30.654695 systemd[1]: Starting kubelet.service... May 17 00:52:30.931824 systemd[1]: Started kubelet.service. May 17 00:52:30.972522 kubelet[2031]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:52:30.972522 kubelet[2031]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:52:30.972522 kubelet[2031]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:52:30.972911 kubelet[2031]: I0517 00:52:30.972602 2031 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:52:31.643953 kubelet[2031]: I0517 00:52:31.643910 2031 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:52:31.643953 kubelet[2031]: I0517 00:52:31.643945 2031 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:52:31.644206 kubelet[2031]: I0517 00:52:31.644188 2031 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:52:31.662757 kubelet[2031]: I0517 00:52:31.662716 2031 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:52:31.670957 kubelet[2031]: E0517 00:52:31.670909 2031 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:52:31.670957 kubelet[2031]: I0517 00:52:31.670953 2031 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 17 00:52:31.674657 kubelet[2031]: I0517 00:52:31.674635 2031 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:52:31.674910 kubelet[2031]: I0517 00:52:31.674885 2031 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:52:31.675012 kubelet[2031]: I0517 00:52:31.674980 2031 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:52:31.675197 kubelet[2031]: I0517 00:52:31.675010 2031 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.20.39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:52:31.675197 kubelet[2031]: I0517 00:52:31.675196 2031 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:52:31.675330 kubelet[2031]: I0517 00:52:31.675205 2031 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:52:31.675330 kubelet[2031]: I0517 00:52:31.675303 2031 state_mem.go:36] "Initialized new in-memory state store" May 17 00:52:31.678716 kubelet[2031]: I0517 00:52:31.678685 2031 kubelet.go:408] "Attempting to sync node with API server" May 17 00:52:31.678824 kubelet[2031]: I0517 00:52:31.678736 2031 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:52:31.678824 kubelet[2031]: I0517 00:52:31.678773 2031 kubelet.go:314] "Adding apiserver pod source" May 17 00:52:31.678824 kubelet[2031]: I0517 00:52:31.678789 2031 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:52:31.682437 kubelet[2031]: E0517 00:52:31.682376 2031 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:31.682437 kubelet[2031]: E0517 00:52:31.682426 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:31.683416 kubelet[2031]: I0517 00:52:31.683401 2031 kuberuntime_manager.go:262] "Container runtime 
initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:52:31.684468 kubelet[2031]: I0517 00:52:31.684452 2031 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:52:31.684608 kubelet[2031]: W0517 00:52:31.684597 2031 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:52:31.685232 kubelet[2031]: I0517 00:52:31.685217 2031 server.go:1274] "Started kubelet" May 17 00:52:31.688536 kubelet[2031]: I0517 00:52:31.688500 2031 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:52:31.689289 kubelet[2031]: I0517 00:52:31.689241 2031 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:52:31.689703 kubelet[2031]: I0517 00:52:31.689684 2031 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:52:31.689822 kubelet[2031]: I0517 00:52:31.689319 2031 server.go:449] "Adding debug handlers to kubelet server" May 17 00:52:31.699315 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 17 00:52:31.699619 kubelet[2031]: I0517 00:52:31.699594 2031 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:52:31.703868 kubelet[2031]: E0517 00:52:31.702805 2031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.39.18402a49c16292df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.39,UID:10.200.20.39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.20.39,},FirstTimestamp:2025-05-17 00:52:31.685194463 +0000 UTC m=+0.748544626,LastTimestamp:2025-05-17 00:52:31.685194463 +0000 UTC m=+0.748544626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.39,}" May 17 00:52:31.704084 kubelet[2031]: W0517 00:52:31.704064 2031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.200.20.39" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 17 00:52:31.704177 kubelet[2031]: E0517 00:52:31.704161 2031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.200.20.39\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 17 00:52:31.704348 kubelet[2031]: I0517 00:52:31.704320 2031 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:52:31.704437 kubelet[2031]: W0517 00:52:31.704423 2031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 17 00:52:31.704515 kubelet[2031]: E0517 00:52:31.704500 2031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 17 00:52:31.706384 kubelet[2031]: I0517 00:52:31.706347 2031 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:52:31.706611 kubelet[2031]: E0517 00:52:31.706562 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:31.706962 kubelet[2031]: E0517 00:52:31.706903 2031 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:52:31.707510 kubelet[2031]: I0517 00:52:31.707485 2031 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:52:31.709905 kubelet[2031]: I0517 00:52:31.709888 2031 factory.go:221] Registration of the containerd container factory successfully May 17 00:52:31.710010 kubelet[2031]: I0517 00:52:31.709998 2031 factory.go:221] Registration of the systemd container factory successfully May 17 00:52:31.710816 kubelet[2031]: I0517 00:52:31.710785 2031 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:52:31.710887 kubelet[2031]: I0517 00:52:31.710849 2031 reconciler.go:26] "Reconciler: start to sync state" May 17 00:52:31.732102 kubelet[2031]: E0517 00:52:31.732059 2031 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.39\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 17 00:52:31.732324 kubelet[2031]: E0517 00:52:31.732212 2031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.39.18402a49c23eaa96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.39,UID:10.200.20.39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:10.200.20.39,},FirstTimestamp:2025-05-17 00:52:31.699618454 +0000 UTC m=+0.762968577,LastTimestamp:2025-05-17 00:52:31.699618454 +0000 UTC m=+0.762968577,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.39,}" May 17 00:52:31.735800 kubelet[2031]: I0517 00:52:31.735774 2031 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:52:31.735914 kubelet[2031]: I0517 00:52:31.735842 2031 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:52:31.735914 kubelet[2031]: I0517 00:52:31.735861 2031 state_mem.go:36] "Initialized new in-memory state store" May 17 00:52:31.741019 kubelet[2031]: I0517 00:52:31.740990 2031 policy_none.go:49] "None policy: Start" May 17 00:52:31.741751 kubelet[2031]: I0517 00:52:31.741732 2031 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:52:31.741823 kubelet[2031]: I0517 00:52:31.741760 2031 state_mem.go:35] "Initializing new in-memory state store" May 17 00:52:31.748729 kubelet[2031]: I0517 
00:52:31.748705 2031 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:52:31.748966 kubelet[2031]: I0517 00:52:31.748954 2031 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:52:31.749057 kubelet[2031]: I0517 00:52:31.749020 2031 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:52:31.750505 kubelet[2031]: I0517 00:52:31.750483 2031 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:52:31.752014 kubelet[2031]: E0517 00:52:31.751991 2031 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.39\" not found" May 17 00:52:31.777508 kubelet[2031]: I0517 00:52:31.777470 2031 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:52:31.778588 kubelet[2031]: I0517 00:52:31.778547 2031 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:52:31.778676 kubelet[2031]: I0517 00:52:31.778595 2031 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:52:31.778676 kubelet[2031]: I0517 00:52:31.778614 2031 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:52:31.778676 kubelet[2031]: E0517 00:52:31.778656 2031 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 17 00:52:31.850501 kubelet[2031]: I0517 00:52:31.850458 2031 kubelet_node_status.go:72] "Attempting to register node" node="10.200.20.39" May 17 00:52:31.855601 kubelet[2031]: I0517 00:52:31.855577 2031 kubelet_node_status.go:75] "Successfully registered node" node="10.200.20.39" May 17 00:52:31.855685 kubelet[2031]: E0517 00:52:31.855607 2031 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.200.20.39\": node \"10.200.20.39\" not found" May 17 00:52:31.877908 kubelet[2031]: E0517 00:52:31.877877 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:31.963390 sudo[1886]: pam_unix(sudo:session): session closed for user root May 17 00:52:31.978594 kubelet[2031]: E0517 00:52:31.978556 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.061790 sshd[1868]: pam_unix(sshd:session): session closed for user core May 17 00:52:32.065098 systemd[1]: sshd@4-10.200.20.39:22-10.200.16.10:56850.service: Deactivated successfully. May 17 00:52:32.066070 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:52:32.066399 systemd-logind[1544]: Session 7 logged out. Waiting for processes to exit. May 17 00:52:32.067142 systemd-logind[1544]: Removed session 7. 
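Note on the eviction manager started above: it evaluates the hard-eviction thresholds from the earlier NodeConfig dump (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A sketch of how such thresholds compare against node stats; the sample numbers are invented for illustration.

    #!/usr/bin/env python3
    # Sketch: evaluate the hard-eviction thresholds from the NodeConfig dump
    # above against node stats. The sample stats below are made up.
    HARD_EVICTION = {
        "memory.available": ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
        "nodefs.available": ("percentage", 0.10),
        "nodefs.inodesFree": ("percentage", 0.05),
        "imagefs.available": ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def breached(signal: str, available: float, capacity: float) -> bool:
        kind, threshold = HARD_EVICTION[signal]
        limit = threshold if kind == "quantity" else threshold * capacity
        return available < limit

    # Invented sample: 4 GiB of RAM with only 80 MiB free breaches memory.available.
    print(breached("memory.available", available=80 * 1024**2, capacity=4 * 1024**3))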
May 17 00:52:32.080005 kubelet[2031]: E0517 00:52:32.079967 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.180501 kubelet[2031]: E0517 00:52:32.180473 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.281148 kubelet[2031]: E0517 00:52:32.281121 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.381593 kubelet[2031]: E0517 00:52:32.381557 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.482044 kubelet[2031]: E0517 00:52:32.482022 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.582515 kubelet[2031]: E0517 00:52:32.582439 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.645938 kubelet[2031]: I0517 00:52:32.645912 2031 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 17 00:52:32.646125 kubelet[2031]: W0517 00:52:32.646091 2031 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 17 00:52:32.646125 kubelet[2031]: W0517 00:52:32.646089 2031 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 17 00:52:32.683292 kubelet[2031]: E0517 00:52:32.683204 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:32.683292 kubelet[2031]: E0517 00:52:32.683276 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.783582 kubelet[2031]: E0517 00:52:32.783545 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.884217 kubelet[2031]: E0517 00:52:32.883954 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:32.984432 kubelet[2031]: E0517 00:52:32.984390 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:33.085231 kubelet[2031]: E0517 00:52:33.085207 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:33.186188 kubelet[2031]: E0517 00:52:33.185974 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:33.286775 kubelet[2031]: E0517 00:52:33.286745 2031 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.200.20.39\" not found" May 17 00:52:33.387690 kubelet[2031]: I0517 00:52:33.387668 2031 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 17 00:52:33.388243 env[1559]: time="2025-05-17T00:52:33.388146896Z" level=info msg="No cni config template is specified, wait for 
other system components to drop the config." May 17 00:52:33.388553 kubelet[2031]: I0517 00:52:33.388306 2031 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 17 00:52:33.683782 kubelet[2031]: I0517 00:52:33.683749 2031 apiserver.go:52] "Watching apiserver" May 17 00:52:33.683928 kubelet[2031]: E0517 00:52:33.683889 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:33.712894 kubelet[2031]: I0517 00:52:33.712862 2031 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:52:33.721611 kubelet[2031]: I0517 00:52:33.721586 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f216ead2-5434-4436-8949-ef077d141457-cilium-config-path\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.721721 kubelet[2031]: I0517 00:52:33.721615 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-host-proc-sys-net\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.721721 kubelet[2031]: I0517 00:52:33.721634 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djp9m\" (UniqueName: \"kubernetes.io/projected/f216ead2-5434-4436-8949-ef077d141457-kube-api-access-djp9m\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.721721 kubelet[2031]: I0517 00:52:33.721660 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f07a4a0-9246-4484-9625-3dd5889fad46-kube-proxy\") pod \"kube-proxy-5gdl2\" (UID: \"8f07a4a0-9246-4484-9625-3dd5889fad46\") " pod="kube-system/kube-proxy-5gdl2" May 17 00:52:33.721721 kubelet[2031]: I0517 00:52:33.721680 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f07a4a0-9246-4484-9625-3dd5889fad46-xtables-lock\") pod \"kube-proxy-5gdl2\" (UID: \"8f07a4a0-9246-4484-9625-3dd5889fad46\") " pod="kube-system/kube-proxy-5gdl2" May 17 00:52:33.721721 kubelet[2031]: I0517 00:52:33.721694 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f07a4a0-9246-4484-9625-3dd5889fad46-lib-modules\") pod \"kube-proxy-5gdl2\" (UID: \"8f07a4a0-9246-4484-9625-3dd5889fad46\") " pod="kube-system/kube-proxy-5gdl2" May 17 00:52:33.721721 kubelet[2031]: I0517 00:52:33.721708 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-bpf-maps\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.721917 kubelet[2031]: I0517 00:52:33.721720 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-xtables-lock\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.721917 kubelet[2031]: I0517 00:52:33.721751 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f216ead2-5434-4436-8949-ef077d141457-hubble-tls\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.721917 kubelet[2031]: I0517 00:52:33.721765 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzgv\" (UniqueName: \"kubernetes.io/projected/8f07a4a0-9246-4484-9625-3dd5889fad46-kube-api-access-fmzgv\") pod \"kube-proxy-5gdl2\" (UID: \"8f07a4a0-9246-4484-9625-3dd5889fad46\") " pod="kube-system/kube-proxy-5gdl2" May 17 00:52:33.721917 kubelet[2031]: I0517 00:52:33.721779 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cilium-run\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.721917 kubelet[2031]: I0517 00:52:33.721798 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cilium-cgroup\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.721917 kubelet[2031]: I0517 00:52:33.721824 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-lib-modules\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.722050 kubelet[2031]: I0517 00:52:33.721839 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-host-proc-sys-kernel\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.722050 kubelet[2031]: I0517 00:52:33.721855 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f216ead2-5434-4436-8949-ef077d141457-clustermesh-secrets\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.722050 kubelet[2031]: I0517 00:52:33.721869 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-hostproc\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.722050 kubelet[2031]: I0517 00:52:33.721894 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cni-path\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 
17 00:52:33.722050 kubelet[2031]: I0517 00:52:33.721910 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-etc-cni-netd\") pod \"cilium-cr5v4\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " pod="kube-system/cilium-cr5v4" May 17 00:52:33.823232 kubelet[2031]: I0517 00:52:33.823192 2031 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:52:33.993200 env[1559]: time="2025-05-17T00:52:33.992586288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gdl2,Uid:8f07a4a0-9246-4484-9625-3dd5889fad46,Namespace:kube-system,Attempt:0,}" May 17 00:52:33.993200 env[1559]: time="2025-05-17T00:52:33.993066335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cr5v4,Uid:f216ead2-5434-4436-8949-ef077d141457,Namespace:kube-system,Attempt:0,}" May 17 00:52:34.684232 kubelet[2031]: E0517 00:52:34.684192 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:35.582808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520123101.mount: Deactivated successfully. May 17 00:52:35.605501 env[1559]: time="2025-05-17T00:52:35.605459612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:35.610395 env[1559]: time="2025-05-17T00:52:35.610363993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:35.620475 env[1559]: time="2025-05-17T00:52:35.620435837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:35.624017 env[1559]: time="2025-05-17T00:52:35.623974441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:35.628372 env[1559]: time="2025-05-17T00:52:35.628345535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:35.631790 env[1559]: time="2025-05-17T00:52:35.631766697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:35.637671 env[1559]: time="2025-05-17T00:52:35.637626010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:35.641344 env[1559]: time="2025-05-17T00:52:35.641309056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:35.685045 kubelet[2031]: E0517 00:52:35.684977 2031 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:35.702923 env[1559]: time="2025-05-17T00:52:35.702735496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:35.702923 env[1559]: time="2025-05-17T00:52:35.702772176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:35.702923 env[1559]: time="2025-05-17T00:52:35.702782536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:35.703213 env[1559]: time="2025-05-17T00:52:35.703151461Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b pid=2088 runtime=io.containerd.runc.v2 May 17 00:52:35.708307 env[1559]: time="2025-05-17T00:52:35.708224204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:35.708626 env[1559]: time="2025-05-17T00:52:35.708530448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:35.708684 env[1559]: time="2025-05-17T00:52:35.708619249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:35.708931 env[1559]: time="2025-05-17T00:52:35.708892332Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83c0f47298a3039bca65d99f5015cf01b2b813649091a66edf8d57c35f350dc5 pid=2087 runtime=io.containerd.runc.v2 May 17 00:52:35.760976 env[1559]: time="2025-05-17T00:52:35.760929256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cr5v4,Uid:f216ead2-5434-4436-8949-ef077d141457,Namespace:kube-system,Attempt:0,} returns sandbox id \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\"" May 17 00:52:35.763117 env[1559]: time="2025-05-17T00:52:35.763056883Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:52:35.763459 env[1559]: time="2025-05-17T00:52:35.763429447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5gdl2,Uid:8f07a4a0-9246-4484-9625-3dd5889fad46,Namespace:kube-system,Attempt:0,} returns sandbox id \"83c0f47298a3039bca65d99f5015cf01b2b813649091a66edf8d57c35f350dc5\"" May 17 00:52:36.427768 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB May 17 00:52:36.685328 kubelet[2031]: E0517 00:52:36.685217 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:37.685719 kubelet[2031]: E0517 00:52:37.685618 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:38.686668 kubelet[2031]: E0517 00:52:38.686631 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:39.686955 kubelet[2031]: E0517 00:52:39.686911 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:40.687452 kubelet[2031]: E0517 00:52:40.687399 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:40.916711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12770600.mount: Deactivated successfully. May 17 00:52:41.688455 kubelet[2031]: E0517 00:52:41.688413 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:42.689022 kubelet[2031]: E0517 00:52:42.688950 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:43.071867 env[1559]: time="2025-05-17T00:52:43.071801744Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:43.077912 env[1559]: time="2025-05-17T00:52:43.077875509Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:43.083038 env[1559]: time="2025-05-17T00:52:43.083000347Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:43.083667 env[1559]: time="2025-05-17T00:52:43.083636991Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 17 00:52:43.085891 env[1559]: time="2025-05-17T00:52:43.085860528Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:52:43.086827 env[1559]: time="2025-05-17T00:52:43.086780655Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:52:43.115705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685109195.mount: Deactivated successfully. May 17 00:52:43.123057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946998381.mount: Deactivated successfully. 
May 17 00:52:43.164320 env[1559]: time="2025-05-17T00:52:43.164264627Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\"" May 17 00:52:43.165102 env[1559]: time="2025-05-17T00:52:43.165068793Z" level=info msg="StartContainer for \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\"" May 17 00:52:43.217886 env[1559]: time="2025-05-17T00:52:43.217838023Z" level=info msg="StartContainer for \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\" returns successfully" May 17 00:52:43.235982 update_engine[1546]: I0517 00:52:43.235814 1546 update_attempter.cc:509] Updating boot flags... May 17 00:52:43.690054 kubelet[2031]: E0517 00:52:43.690017 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:44.112932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b-rootfs.mount: Deactivated successfully. May 17 00:52:44.690864 kubelet[2031]: E0517 00:52:44.690805 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:45.272325 env[1559]: time="2025-05-17T00:52:45.272114333Z" level=info msg="shim disconnected" id=ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b May 17 00:52:45.272325 env[1559]: time="2025-05-17T00:52:45.272161533Z" level=warning msg="cleaning up after shim disconnected" id=ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b namespace=k8s.io May 17 00:52:45.272325 env[1559]: time="2025-05-17T00:52:45.272170693Z" level=info msg="cleaning up dead shim" May 17 00:52:45.279729 env[1559]: time="2025-05-17T00:52:45.279687302Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2245 runtime=io.containerd.runc.v2\n" May 17 00:52:45.691488 kubelet[2031]: E0517 00:52:45.691069 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:45.810306 env[1559]: time="2025-05-17T00:52:45.810265386Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:52:45.835221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3026133675.mount: Deactivated successfully. May 17 00:52:45.840883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454945089.mount: Deactivated successfully. 
May 17 00:52:45.901237 env[1559]: time="2025-05-17T00:52:45.900985415Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\"" May 17 00:52:45.901782 env[1559]: time="2025-05-17T00:52:45.901742540Z" level=info msg="StartContainer for \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\"" May 17 00:52:45.996492 env[1559]: time="2025-05-17T00:52:45.996160113Z" level=info msg="StartContainer for \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\" returns successfully" May 17 00:52:46.001178 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:52:46.001438 systemd[1]: Stopped systemd-sysctl.service. May 17 00:52:46.001618 systemd[1]: Stopping systemd-sysctl.service... May 17 00:52:46.003184 systemd[1]: Starting systemd-sysctl.service... May 17 00:52:46.018094 systemd[1]: Finished systemd-sysctl.service. May 17 00:52:46.056901 env[1559]: time="2025-05-17T00:52:46.056857845Z" level=info msg="shim disconnected" id=061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5 May 17 00:52:46.057161 env[1559]: time="2025-05-17T00:52:46.057141647Z" level=warning msg="cleaning up after shim disconnected" id=061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5 namespace=k8s.io May 17 00:52:46.057247 env[1559]: time="2025-05-17T00:52:46.057233287Z" level=info msg="cleaning up dead shim" May 17 00:52:46.064864 env[1559]: time="2025-05-17T00:52:46.064825293Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2311 runtime=io.containerd.runc.v2\n" May 17 00:52:46.691548 kubelet[2031]: E0517 00:52:46.691499 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:46.812520 env[1559]: time="2025-05-17T00:52:46.812478843Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:52:46.831137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5-rootfs.mount: Deactivated successfully. May 17 00:52:46.856087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281683501.mount: Deactivated successfully. May 17 00:52:46.863651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4235548280.mount: Deactivated successfully. 
May 17 00:52:46.880588 env[1559]: time="2025-05-17T00:52:46.880527738Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\"" May 17 00:52:46.881539 env[1559]: time="2025-05-17T00:52:46.881515544Z" level=info msg="StartContainer for \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\"" May 17 00:52:46.928799 env[1559]: time="2025-05-17T00:52:46.928743191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:46.938543 env[1559]: time="2025-05-17T00:52:46.938502370Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:46.940443 env[1559]: time="2025-05-17T00:52:46.940404062Z" level=info msg="StartContainer for \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\" returns successfully" May 17 00:52:46.946690 env[1559]: time="2025-05-17T00:52:46.946145697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:46.950270 env[1559]: time="2025-05-17T00:52:46.950224562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:46.950500 env[1559]: time="2025-05-17T00:52:46.950458643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 00:52:46.952918 env[1559]: time="2025-05-17T00:52:46.952866658Z" level=info msg="CreateContainer within sandbox \"83c0f47298a3039bca65d99f5015cf01b2b813649091a66edf8d57c35f350dc5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:52:47.573220 env[1559]: time="2025-05-17T00:52:47.573161136Z" level=info msg="CreateContainer within sandbox \"83c0f47298a3039bca65d99f5015cf01b2b813649091a66edf8d57c35f350dc5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd2a0b71f5f411f15bb55bc9dd2e9827478e53c410872aab7e3bcf9270d26266\"" May 17 00:52:47.573756 env[1559]: time="2025-05-17T00:52:47.573729099Z" level=info msg="StartContainer for \"fd2a0b71f5f411f15bb55bc9dd2e9827478e53c410872aab7e3bcf9270d26266\"" May 17 00:52:47.579992 env[1559]: time="2025-05-17T00:52:47.579919095Z" level=info msg="shim disconnected" id=3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1 May 17 00:52:47.579992 env[1559]: time="2025-05-17T00:52:47.579989775Z" level=warning msg="cleaning up after shim disconnected" id=3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1 namespace=k8s.io May 17 00:52:47.579992 env[1559]: time="2025-05-17T00:52:47.580001135Z" level=info msg="cleaning up dead shim" May 17 00:52:47.591069 env[1559]: time="2025-05-17T00:52:47.590996678Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2370 runtime=io.containerd.runc.v2\n" May 17 00:52:47.653237 env[1559]: 
time="2025-05-17T00:52:47.653171113Z" level=info msg="StartContainer for \"fd2a0b71f5f411f15bb55bc9dd2e9827478e53c410872aab7e3bcf9270d26266\" returns successfully" May 17 00:52:47.691787 kubelet[2031]: E0517 00:52:47.691676 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:47.818198 env[1559]: time="2025-05-17T00:52:47.818145694Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:52:47.829138 kubelet[2031]: I0517 00:52:47.828640 2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5gdl2" podStartSLOduration=5.641320722 podStartE2EDuration="16.828621994s" podCreationTimestamp="2025-05-17 00:52:31 +0000 UTC" firstStartedPulling="2025-05-17 00:52:35.764312978 +0000 UTC m=+4.827663101" lastFinishedPulling="2025-05-17 00:52:46.95161421 +0000 UTC m=+16.014964373" observedRunningTime="2025-05-17 00:52:47.828133351 +0000 UTC m=+16.891483514" watchObservedRunningTime="2025-05-17 00:52:47.828621994 +0000 UTC m=+16.891972117" May 17 00:52:47.847162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2227241929.mount: Deactivated successfully. May 17 00:52:47.852380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3914055711.mount: Deactivated successfully. May 17 00:52:47.868998 env[1559]: time="2025-05-17T00:52:47.868924224Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\"" May 17 00:52:47.870019 env[1559]: time="2025-05-17T00:52:47.869986070Z" level=info msg="StartContainer for \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\"" May 17 00:52:47.920893 env[1559]: time="2025-05-17T00:52:47.920846920Z" level=info msg="StartContainer for \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\" returns successfully" May 17 00:52:47.950620 env[1559]: time="2025-05-17T00:52:47.949937566Z" level=info msg="shim disconnected" id=44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94 May 17 00:52:47.950620 env[1559]: time="2025-05-17T00:52:47.949997446Z" level=warning msg="cleaning up after shim disconnected" id=44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94 namespace=k8s.io May 17 00:52:47.950620 env[1559]: time="2025-05-17T00:52:47.950006806Z" level=info msg="cleaning up dead shim" May 17 00:52:47.959588 env[1559]: time="2025-05-17T00:52:47.959532300Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2527 runtime=io.containerd.runc.v2\n" May 17 00:52:48.692720 kubelet[2031]: E0517 00:52:48.692685 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:48.823025 env[1559]: time="2025-05-17T00:52:48.822980214Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:52:48.849345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835750344.mount: Deactivated successfully. 
May 17 00:52:48.856138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572945689.mount: Deactivated successfully. May 17 00:52:48.876998 env[1559]: time="2025-05-17T00:52:48.876942863Z" level=info msg="CreateContainer within sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\"" May 17 00:52:48.877837 env[1559]: time="2025-05-17T00:52:48.877785147Z" level=info msg="StartContainer for \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\"" May 17 00:52:48.932664 env[1559]: time="2025-05-17T00:52:48.932604040Z" level=info msg="StartContainer for \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\" returns successfully" May 17 00:52:49.011592 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 17 00:52:49.071325 kubelet[2031]: I0517 00:52:49.071061 2031 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:52:49.505701 kernel: Initializing XFRM netlink socket May 17 00:52:49.514599 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 17 00:52:49.693841 kubelet[2031]: E0517 00:52:49.693796 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:49.848338 kubelet[2031]: I0517 00:52:49.848195 2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cr5v4" podStartSLOduration=11.525792051 podStartE2EDuration="18.848176655s" podCreationTimestamp="2025-05-17 00:52:31 +0000 UTC" firstStartedPulling="2025-05-17 00:52:35.762612077 +0000 UTC m=+4.825962240" lastFinishedPulling="2025-05-17 00:52:43.084996681 +0000 UTC m=+12.148346844" observedRunningTime="2025-05-17 00:52:49.848009414 +0000 UTC m=+18.911359577" watchObservedRunningTime="2025-05-17 00:52:49.848176655 +0000 UTC m=+18.911526818" May 17 00:52:50.695404 kubelet[2031]: E0517 00:52:50.695350 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:51.146691 systemd-networkd[1725]: cilium_host: Link UP May 17 00:52:51.146798 systemd-networkd[1725]: cilium_net: Link UP May 17 00:52:51.146801 systemd-networkd[1725]: cilium_net: Gained carrier May 17 00:52:51.146907 systemd-networkd[1725]: cilium_host: Gained carrier May 17 00:52:51.153683 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:52:51.155688 systemd-networkd[1725]: cilium_host: Gained IPv6LL May 17 00:52:51.333995 systemd-networkd[1725]: cilium_vxlan: Link UP May 17 00:52:51.334002 systemd-networkd[1725]: cilium_vxlan: Gained carrier May 17 00:52:51.584652 kernel: NET: Registered PF_ALG protocol family May 17 00:52:51.679720 kubelet[2031]: E0517 00:52:51.679645 2031 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:51.696071 kubelet[2031]: E0517 00:52:51.696026 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:51.928114 kubelet[2031]: I0517 00:52:51.927660 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhlvm\" (UniqueName: \"kubernetes.io/projected/74a469e6-3b32-4efe-b486-be1759a53b92-kube-api-access-vhlvm\") 
pod \"nginx-deployment-8587fbcb89-5qbvk\" (UID: \"74a469e6-3b32-4efe-b486-be1759a53b92\") " pod="default/nginx-deployment-8587fbcb89-5qbvk" May 17 00:52:52.069675 systemd-networkd[1725]: cilium_net: Gained IPv6LL May 17 00:52:52.128487 env[1559]: time="2025-05-17T00:52:52.128125978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-5qbvk,Uid:74a469e6-3b32-4efe-b486-be1759a53b92,Namespace:default,Attempt:0,}" May 17 00:52:52.291962 systemd-networkd[1725]: lxc_health: Link UP May 17 00:52:52.310237 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:52:52.309813 systemd-networkd[1725]: lxc_health: Gained carrier May 17 00:52:52.690240 systemd-networkd[1725]: lxce03641cbfada: Link UP May 17 00:52:52.698724 kubelet[2031]: E0517 00:52:52.698662 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:52.701645 kernel: eth0: renamed from tmpa1b3e May 17 00:52:52.716643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce03641cbfada: link becomes ready May 17 00:52:52.716788 systemd-networkd[1725]: lxce03641cbfada: Gained carrier May 17 00:52:52.901710 systemd-networkd[1725]: cilium_vxlan: Gained IPv6LL May 17 00:52:53.477770 systemd-networkd[1725]: lxc_health: Gained IPv6LL May 17 00:52:53.698990 kubelet[2031]: E0517 00:52:53.698943 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:54.181722 systemd-networkd[1725]: lxce03641cbfada: Gained IPv6LL May 17 00:52:54.699954 kubelet[2031]: E0517 00:52:54.699898 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:55.700595 kubelet[2031]: E0517 00:52:55.700536 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:56.395696 env[1559]: time="2025-05-17T00:52:56.395460812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:56.395696 env[1559]: time="2025-05-17T00:52:56.395503492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:56.395696 env[1559]: time="2025-05-17T00:52:56.395514211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:56.396370 env[1559]: time="2025-05-17T00:52:56.396132248Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1b3ebbdd57eeee34a841b0e74d931194b4678424c5b850601cfc5fda8b1cdb1 pid=3117 runtime=io.containerd.runc.v2 May 17 00:52:56.449426 env[1559]: time="2025-05-17T00:52:56.449383562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-5qbvk,Uid:74a469e6-3b32-4efe-b486-be1759a53b92,Namespace:default,Attempt:0,} returns sandbox id \"a1b3ebbdd57eeee34a841b0e74d931194b4678424c5b850601cfc5fda8b1cdb1\"" May 17 00:52:56.451713 env[1559]: time="2025-05-17T00:52:56.451669270Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:52:56.701392 kubelet[2031]: E0517 00:52:56.700972 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:57.702078 kubelet[2031]: E0517 00:52:57.702037 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:58.702410 kubelet[2031]: E0517 00:52:58.702356 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:58.923693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804180124.mount: Deactivated successfully. May 17 00:52:59.702989 kubelet[2031]: E0517 00:52:59.702947 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:00.192872 env[1559]: time="2025-05-17T00:53:00.192819640Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:00.200916 env[1559]: time="2025-05-17T00:53:00.200817282Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:00.206660 env[1559]: time="2025-05-17T00:53:00.206622174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:00.213307 env[1559]: time="2025-05-17T00:53:00.213265502Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:00.214363 env[1559]: time="2025-05-17T00:53:00.214331657Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 17 00:53:00.217307 env[1559]: time="2025-05-17T00:53:00.217262203Z" level=info msg="CreateContainer within sandbox \"a1b3ebbdd57eeee34a841b0e74d931194b4678424c5b850601cfc5fda8b1cdb1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 17 00:53:00.246091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585516235.mount: Deactivated successfully. May 17 00:53:00.253047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503824370.mount: Deactivated successfully. 
May 17 00:53:00.266337 env[1559]: time="2025-05-17T00:53:00.266290528Z" level=info msg="CreateContainer within sandbox \"a1b3ebbdd57eeee34a841b0e74d931194b4678424c5b850601cfc5fda8b1cdb1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2a84b3e49d66333be055994ef34055aad35568457ac9763ce7d61963a001d6be\"" May 17 00:53:00.267003 env[1559]: time="2025-05-17T00:53:00.266972684Z" level=info msg="StartContainer for \"2a84b3e49d66333be055994ef34055aad35568457ac9763ce7d61963a001d6be\"" May 17 00:53:00.319112 env[1559]: time="2025-05-17T00:53:00.319051074Z" level=info msg="StartContainer for \"2a84b3e49d66333be055994ef34055aad35568457ac9763ce7d61963a001d6be\" returns successfully" May 17 00:53:00.703981 kubelet[2031]: E0517 00:53:00.703940 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:01.704800 kubelet[2031]: E0517 00:53:01.704752 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:02.705415 kubelet[2031]: E0517 00:53:02.705368 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:03.706250 kubelet[2031]: E0517 00:53:03.706214 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:04.707135 kubelet[2031]: E0517 00:53:04.707099 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:05.707424 kubelet[2031]: E0517 00:53:05.707389 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:06.708376 kubelet[2031]: E0517 00:53:06.708308 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:07.709083 kubelet[2031]: E0517 00:53:07.709043 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:07.977353 kubelet[2031]: I0517 00:53:07.975720 2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-5qbvk" podStartSLOduration=13.21080246 podStartE2EDuration="16.975702436s" podCreationTimestamp="2025-05-17 00:52:51 +0000 UTC" firstStartedPulling="2025-05-17 00:52:56.450937914 +0000 UTC m=+25.514288077" lastFinishedPulling="2025-05-17 00:53:00.21583789 +0000 UTC m=+29.279188053" observedRunningTime="2025-05-17 00:53:00.862173266 +0000 UTC m=+29.925523429" watchObservedRunningTime="2025-05-17 00:53:07.975702436 +0000 UTC m=+37.039052599" May 17 00:53:08.006303 kubelet[2031]: I0517 00:53:08.006259 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8dbb1d74-414e-4847-aaf8-efc522400dfb-data\") pod \"nfs-server-provisioner-0\" (UID: \"8dbb1d74-414e-4847-aaf8-efc522400dfb\") " pod="default/nfs-server-provisioner-0" May 17 00:53:08.006303 kubelet[2031]: I0517 00:53:08.006307 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq5bf\" (UniqueName: \"kubernetes.io/projected/8dbb1d74-414e-4847-aaf8-efc522400dfb-kube-api-access-mq5bf\") pod \"nfs-server-provisioner-0\" (UID: \"8dbb1d74-414e-4847-aaf8-efc522400dfb\") " pod="default/nfs-server-provisioner-0" May 
17 00:53:08.280688 env[1559]: time="2025-05-17T00:53:08.280262177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8dbb1d74-414e-4847-aaf8-efc522400dfb,Namespace:default,Attempt:0,}" May 17 00:53:08.340042 systemd-networkd[1725]: lxc96dfaf679eef: Link UP May 17 00:53:08.354600 kernel: eth0: renamed from tmp4570a May 17 00:53:08.368527 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:53:08.368652 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc96dfaf679eef: link becomes ready May 17 00:53:08.368841 systemd-networkd[1725]: lxc96dfaf679eef: Gained carrier May 17 00:53:08.515927 env[1559]: time="2025-05-17T00:53:08.515840506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:53:08.516146 env[1559]: time="2025-05-17T00:53:08.516123145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:53:08.516257 env[1559]: time="2025-05-17T00:53:08.516229305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:53:08.516617 env[1559]: time="2025-05-17T00:53:08.516556504Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4570aeee8e59b6ca8d34b7a016488b0070244caa4ab859f817c61bea204deca3 pid=3238 runtime=io.containerd.runc.v2 May 17 00:53:08.568522 env[1559]: time="2025-05-17T00:53:08.568473263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8dbb1d74-414e-4847-aaf8-efc522400dfb,Namespace:default,Attempt:0,} returns sandbox id \"4570aeee8e59b6ca8d34b7a016488b0070244caa4ab859f817c61bea204deca3\"" May 17 00:53:08.570022 env[1559]: time="2025-05-17T00:53:08.569809258Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 17 00:53:08.710079 kubelet[2031]: E0517 00:53:08.710025 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:09.119099 systemd[1]: run-containerd-runc-k8s.io-4570aeee8e59b6ca8d34b7a016488b0070244caa4ab859f817c61bea204deca3-runc.bL9cOd.mount: Deactivated successfully. May 17 00:53:09.477745 systemd-networkd[1725]: lxc96dfaf679eef: Gained IPv6LL May 17 00:53:09.711123 kubelet[2031]: E0517 00:53:09.711070 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:10.711759 kubelet[2031]: E0517 00:53:10.711699 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:11.039634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2582301398.mount: Deactivated successfully. 
May 17 00:53:11.679805 kubelet[2031]: E0517 00:53:11.679761 2031 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:11.712385 kubelet[2031]: E0517 00:53:11.712351 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:12.713031 kubelet[2031]: E0517 00:53:12.712995 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:13.167305 env[1559]: time="2025-05-17T00:53:13.167258560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:13.179601 env[1559]: time="2025-05-17T00:53:13.179515358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:13.183649 env[1559]: time="2025-05-17T00:53:13.183610984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:13.187730 env[1559]: time="2025-05-17T00:53:13.187692251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:13.188465 env[1559]: time="2025-05-17T00:53:13.188432608Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 17 00:53:13.191227 env[1559]: time="2025-05-17T00:53:13.191187879Z" level=info msg="CreateContainer within sandbox \"4570aeee8e59b6ca8d34b7a016488b0070244caa4ab859f817c61bea204deca3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 17 00:53:13.212311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746597718.mount: Deactivated successfully. May 17 00:53:13.218363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007618258.mount: Deactivated successfully. 
May 17 00:53:13.235649 env[1559]: time="2025-05-17T00:53:13.235556649Z" level=info msg="CreateContainer within sandbox \"4570aeee8e59b6ca8d34b7a016488b0070244caa4ab859f817c61bea204deca3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"93385765f8b01f8bdc02ad5763af7a74a59c951d31a62405393808d8aac93ab6\"" May 17 00:53:13.236361 env[1559]: time="2025-05-17T00:53:13.236334166Z" level=info msg="StartContainer for \"93385765f8b01f8bdc02ad5763af7a74a59c951d31a62405393808d8aac93ab6\"" May 17 00:53:13.295758 env[1559]: time="2025-05-17T00:53:13.295717445Z" level=info msg="StartContainer for \"93385765f8b01f8bdc02ad5763af7a74a59c951d31a62405393808d8aac93ab6\" returns successfully" May 17 00:53:13.713787 kubelet[2031]: E0517 00:53:13.713748 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:13.888318 kubelet[2031]: I0517 00:53:13.888260 2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.268071893 podStartE2EDuration="6.888243158s" podCreationTimestamp="2025-05-17 00:53:07 +0000 UTC" firstStartedPulling="2025-05-17 00:53:08.569550419 +0000 UTC m=+37.632900582" lastFinishedPulling="2025-05-17 00:53:13.189721684 +0000 UTC m=+42.253071847" observedRunningTime="2025-05-17 00:53:13.887261842 +0000 UTC m=+42.950611965" watchObservedRunningTime="2025-05-17 00:53:13.888243158 +0000 UTC m=+42.951593321" May 17 00:53:14.714707 kubelet[2031]: E0517 00:53:14.714658 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:15.714808 kubelet[2031]: E0517 00:53:15.714776 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:16.715908 kubelet[2031]: E0517 00:53:16.715816 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:17.716940 kubelet[2031]: E0517 00:53:17.716896 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:18.717696 kubelet[2031]: E0517 00:53:18.717656 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:19.719070 kubelet[2031]: E0517 00:53:19.719032 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:20.720094 kubelet[2031]: E0517 00:53:20.720058 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:21.721432 kubelet[2031]: E0517 00:53:21.721390 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:22.721670 kubelet[2031]: E0517 00:53:22.721634 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:22.884349 kubelet[2031]: I0517 00:53:22.884309 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-853cf840-2cf7-47cf-97e3-98c3be88cd07\" (UniqueName: \"kubernetes.io/nfs/93fd458e-b152-446c-9288-7f074bfe63d2-pvc-853cf840-2cf7-47cf-97e3-98c3be88cd07\") pod \"test-pod-1\" (UID: \"93fd458e-b152-446c-9288-7f074bfe63d2\") " pod="default/test-pod-1" May 
17 00:53:22.884349 kubelet[2031]: I0517 00:53:22.884350 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smsps\" (UniqueName: \"kubernetes.io/projected/93fd458e-b152-446c-9288-7f074bfe63d2-kube-api-access-smsps\") pod \"test-pod-1\" (UID: \"93fd458e-b152-446c-9288-7f074bfe63d2\") " pod="default/test-pod-1" May 17 00:53:23.113594 kernel: FS-Cache: Loaded May 17 00:53:23.191864 kernel: RPC: Registered named UNIX socket transport module. May 17 00:53:23.192017 kernel: RPC: Registered udp transport module. May 17 00:53:23.192043 kernel: RPC: Registered tcp transport module. May 17 00:53:23.200601 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 17 00:53:23.326589 kernel: FS-Cache: Netfs 'nfs' registered for caching May 17 00:53:23.514563 kernel: NFS: Registering the id_resolver key type May 17 00:53:23.514690 kernel: Key type id_resolver registered May 17 00:53:23.517899 kernel: Key type id_legacy registered May 17 00:53:23.722744 kubelet[2031]: E0517 00:53:23.722718 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:23.948629 nfsidmap[3358]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.7-n-6dc47d205e' May 17 00:53:23.952774 nfsidmap[3359]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.7-n-6dc47d205e' May 17 00:53:24.038184 env[1559]: time="2025-05-17T00:53:24.038142195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:93fd458e-b152-446c-9288-7f074bfe63d2,Namespace:default,Attempt:0,}" May 17 00:53:24.106787 systemd-networkd[1725]: lxcf7c1e3180ff0: Link UP May 17 00:53:24.118649 kernel: eth0: renamed from tmpbfc6e May 17 00:53:24.133435 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:53:24.133543 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf7c1e3180ff0: link becomes ready May 17 00:53:24.133808 systemd-networkd[1725]: lxcf7c1e3180ff0: Gained carrier May 17 00:53:24.306298 env[1559]: time="2025-05-17T00:53:24.306218147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:53:24.306435 env[1559]: time="2025-05-17T00:53:24.306306466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:53:24.306435 env[1559]: time="2025-05-17T00:53:24.306332066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:53:24.306712 env[1559]: time="2025-05-17T00:53:24.306649025Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfc6ea4ad8561337c63273d8044e6996ca1de66ce81e3de5d096dc4c4ba675a0 pid=3384 runtime=io.containerd.runc.v2 May 17 00:53:24.350584 env[1559]: time="2025-05-17T00:53:24.350518673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:93fd458e-b152-446c-9288-7f074bfe63d2,Namespace:default,Attempt:0,} returns sandbox id \"bfc6ea4ad8561337c63273d8044e6996ca1de66ce81e3de5d096dc4c4ba675a0\"" May 17 00:53:24.352305 env[1559]: time="2025-05-17T00:53:24.352279548Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:53:24.668376 env[1559]: time="2025-05-17T00:53:24.668266856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:24.674287 env[1559]: time="2025-05-17T00:53:24.674253601Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:24.677625 env[1559]: time="2025-05-17T00:53:24.677599312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:24.680843 env[1559]: time="2025-05-17T00:53:24.680804664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:24.681551 env[1559]: time="2025-05-17T00:53:24.681522102Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 17 00:53:24.684497 env[1559]: time="2025-05-17T00:53:24.684468335Z" level=info msg="CreateContainer within sandbox \"bfc6ea4ad8561337c63273d8044e6996ca1de66ce81e3de5d096dc4c4ba675a0\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 17 00:53:24.723283 kubelet[2031]: E0517 00:53:24.723239 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:24.723715 env[1559]: time="2025-05-17T00:53:24.723672874Z" level=info msg="CreateContainer within sandbox \"bfc6ea4ad8561337c63273d8044e6996ca1de66ce81e3de5d096dc4c4ba675a0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1baa1d362bddf5d9fec04f7592a4fc454f1555899f69bc5674e6bc325f7edef1\"" May 17 00:53:24.724481 env[1559]: time="2025-05-17T00:53:24.724458952Z" level=info msg="StartContainer for \"1baa1d362bddf5d9fec04f7592a4fc454f1555899f69bc5674e6bc325f7edef1\"" May 17 00:53:24.771948 env[1559]: time="2025-05-17T00:53:24.771903350Z" level=info msg="StartContainer for \"1baa1d362bddf5d9fec04f7592a4fc454f1555899f69bc5674e6bc325f7edef1\" returns successfully" May 17 00:53:24.903334 kubelet[2031]: I0517 00:53:24.903120 2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.571829584 podStartE2EDuration="16.903104413s" podCreationTimestamp="2025-05-17 00:53:08 +0000 UTC" firstStartedPulling="2025-05-17 00:53:24.35170683 +0000 UTC m=+53.415056993" 
lastFinishedPulling="2025-05-17 00:53:24.682981659 +0000 UTC m=+53.746331822" observedRunningTime="2025-05-17 00:53:24.902590814 +0000 UTC m=+53.965940977" watchObservedRunningTime="2025-05-17 00:53:24.903104413 +0000 UTC m=+53.966454576" May 17 00:53:25.069542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078714326.mount: Deactivated successfully. May 17 00:53:25.221804 systemd-networkd[1725]: lxcf7c1e3180ff0: Gained IPv6LL May 17 00:53:25.723819 kubelet[2031]: E0517 00:53:25.723778 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:26.724169 kubelet[2031]: E0517 00:53:26.724121 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:27.724301 kubelet[2031]: E0517 00:53:27.724248 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:28.725191 kubelet[2031]: E0517 00:53:28.725147 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:29.725292 kubelet[2031]: E0517 00:53:29.725246 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:30.726213 kubelet[2031]: E0517 00:53:30.726176 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:31.235961 systemd[1]: run-containerd-runc-k8s.io-7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4-runc.B2vXHl.mount: Deactivated successfully. May 17 00:53:31.251094 env[1559]: time="2025-05-17T00:53:31.250817138Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:53:31.255216 env[1559]: time="2025-05-17T00:53:31.255183168Z" level=info msg="StopContainer for \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\" with timeout 2 (s)" May 17 00:53:31.255555 env[1559]: time="2025-05-17T00:53:31.255534727Z" level=info msg="Stop container \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\" with signal terminated" May 17 00:53:31.260889 systemd-networkd[1725]: lxc_health: Link DOWN May 17 00:53:31.260895 systemd-networkd[1725]: lxc_health: Lost carrier May 17 00:53:31.297563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4-rootfs.mount: Deactivated successfully. 
May 17 00:53:31.679711 kubelet[2031]: E0517 00:53:31.679585 2031 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:31.726922 kubelet[2031]: E0517 00:53:31.726885 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:31.765534 kubelet[2031]: E0517 00:53:31.765486 2031 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:53:31.807106 env[1559]: time="2025-05-17T00:53:31.807021566Z" level=error msg="collecting metrics for 7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4" error="cgroups: cgroup deleted: unknown" May 17 00:53:32.357955 env[1559]: time="2025-05-17T00:53:32.357911502Z" level=info msg="shim disconnected" id=7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4 May 17 00:53:32.358440 env[1559]: time="2025-05-17T00:53:32.358420941Z" level=warning msg="cleaning up after shim disconnected" id=7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4 namespace=k8s.io May 17 00:53:32.358542 env[1559]: time="2025-05-17T00:53:32.358528181Z" level=info msg="cleaning up dead shim" May 17 00:53:32.365855 env[1559]: time="2025-05-17T00:53:32.365816006Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3515 runtime=io.containerd.runc.v2\n" May 17 00:53:32.370924 env[1559]: time="2025-05-17T00:53:32.370886675Z" level=info msg="StopContainer for \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\" returns successfully" May 17 00:53:32.371705 env[1559]: time="2025-05-17T00:53:32.371672353Z" level=info msg="StopPodSandbox for \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\"" May 17 00:53:32.371790 env[1559]: time="2025-05-17T00:53:32.371753153Z" level=info msg="Container to stop \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:53:32.371790 env[1559]: time="2025-05-17T00:53:32.371770313Z" level=info msg="Container to stop \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:53:32.371790 env[1559]: time="2025-05-17T00:53:32.371781313Z" level=info msg="Container to stop \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:53:32.371937 env[1559]: time="2025-05-17T00:53:32.371791993Z" level=info msg="Container to stop \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:53:32.371937 env[1559]: time="2025-05-17T00:53:32.371804193Z" level=info msg="Container to stop \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:53:32.373735 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b-shm.mount: Deactivated successfully. May 17 00:53:32.395457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b-rootfs.mount: Deactivated successfully. 
May 17 00:53:32.408320 env[1559]: time="2025-05-17T00:53:32.408276835Z" level=info msg="shim disconnected" id=cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b May 17 00:53:32.409049 env[1559]: time="2025-05-17T00:53:32.409017034Z" level=warning msg="cleaning up after shim disconnected" id=cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b namespace=k8s.io May 17 00:53:32.409149 env[1559]: time="2025-05-17T00:53:32.409134713Z" level=info msg="cleaning up dead shim" May 17 00:53:32.416367 env[1559]: time="2025-05-17T00:53:32.416332298Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3548 runtime=io.containerd.runc.v2\n" May 17 00:53:32.416789 env[1559]: time="2025-05-17T00:53:32.416766057Z" level=info msg="TearDown network for sandbox \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" successfully" May 17 00:53:32.416881 env[1559]: time="2025-05-17T00:53:32.416863817Z" level=info msg="StopPodSandbox for \"cafd38d7b34f6db931793c4219c03042ac2e6bf1a52ae9380158bca3a50f6d1b\" returns successfully" May 17 00:53:32.431253 kubelet[2031]: I0517 00:53:32.430061 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-host-proc-sys-net\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431253 kubelet[2031]: I0517 00:53:32.430122 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f216ead2-5434-4436-8949-ef077d141457-clustermesh-secrets\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431253 kubelet[2031]: I0517 00:53:32.430125 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431253 kubelet[2031]: I0517 00:53:32.430154 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-host-proc-sys-kernel\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431253 kubelet[2031]: I0517 00:53:32.430181 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cni-path\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431253 kubelet[2031]: I0517 00:53:32.430197 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-etc-cni-netd\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431516 kubelet[2031]: I0517 00:53:32.430214 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djp9m\" (UniqueName: \"kubernetes.io/projected/f216ead2-5434-4436-8949-ef077d141457-kube-api-access-djp9m\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431516 kubelet[2031]: I0517 00:53:32.430229 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cilium-run\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431516 kubelet[2031]: I0517 00:53:32.430241 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-lib-modules\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431516 kubelet[2031]: I0517 00:53:32.430258 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f216ead2-5434-4436-8949-ef077d141457-cilium-config-path\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431516 kubelet[2031]: I0517 00:53:32.430273 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f216ead2-5434-4436-8949-ef077d141457-hubble-tls\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431516 kubelet[2031]: I0517 00:53:32.430286 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cilium-cgroup\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431685 kubelet[2031]: I0517 00:53:32.430300 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-hostproc\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 
00:53:32.431685 kubelet[2031]: I0517 00:53:32.430315 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-bpf-maps\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431685 kubelet[2031]: I0517 00:53:32.430330 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-xtables-lock\") pod \"f216ead2-5434-4436-8949-ef077d141457\" (UID: \"f216ead2-5434-4436-8949-ef077d141457\") " May 17 00:53:32.431685 kubelet[2031]: I0517 00:53:32.430372 2031 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-host-proc-sys-net\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.431685 kubelet[2031]: I0517 00:53:32.430393 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431685 kubelet[2031]: I0517 00:53:32.430421 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431830 kubelet[2031]: I0517 00:53:32.430439 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cni-path" (OuterVolumeSpecName: "cni-path") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431830 kubelet[2031]: I0517 00:53:32.430452 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431830 kubelet[2031]: I0517 00:53:32.431010 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431830 kubelet[2031]: I0517 00:53:32.431047 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-hostproc" (OuterVolumeSpecName: "hostproc") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431830 kubelet[2031]: I0517 00:53:32.431062 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431943 kubelet[2031]: I0517 00:53:32.431080 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.431943 kubelet[2031]: I0517 00:53:32.431094 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:32.432872 kubelet[2031]: I0517 00:53:32.432841 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f216ead2-5434-4436-8949-ef077d141457-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:53:32.435671 systemd[1]: var-lib-kubelet-pods-f216ead2\x2d5434\x2d4436\x2d8949\x2def077d141457-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:53:32.437116 kubelet[2031]: I0517 00:53:32.437021 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f216ead2-5434-4436-8949-ef077d141457-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:53:32.439961 systemd[1]: var-lib-kubelet-pods-f216ead2\x2d5434\x2d4436\x2d8949\x2def077d141457-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjp9m.mount: Deactivated successfully. May 17 00:53:32.444611 kubelet[2031]: I0517 00:53:32.441729 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f216ead2-5434-4436-8949-ef077d141457-kube-api-access-djp9m" (OuterVolumeSpecName: "kube-api-access-djp9m") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "kube-api-access-djp9m". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:53:32.444329 systemd[1]: var-lib-kubelet-pods-f216ead2\x2d5434\x2d4436\x2d8949\x2def077d141457-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 17 00:53:32.446138 kubelet[2031]: I0517 00:53:32.446101 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f216ead2-5434-4436-8949-ef077d141457-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f216ead2-5434-4436-8949-ef077d141457" (UID: "f216ead2-5434-4436-8949-ef077d141457"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:53:32.530504 kubelet[2031]: I0517 00:53:32.530472 2031 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-host-proc-sys-kernel\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530504 kubelet[2031]: I0517 00:53:32.530496 2031 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cni-path\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530504 kubelet[2031]: I0517 00:53:32.530506 2031 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-etc-cni-netd\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530686 kubelet[2031]: I0517 00:53:32.530517 2031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djp9m\" (UniqueName: \"kubernetes.io/projected/f216ead2-5434-4436-8949-ef077d141457-kube-api-access-djp9m\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530686 kubelet[2031]: I0517 00:53:32.530525 2031 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cilium-run\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530686 kubelet[2031]: I0517 00:53:32.530532 2031 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-lib-modules\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530686 kubelet[2031]: I0517 00:53:32.530540 2031 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f216ead2-5434-4436-8949-ef077d141457-cilium-config-path\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530686 kubelet[2031]: I0517 00:53:32.530547 2031 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f216ead2-5434-4436-8949-ef077d141457-hubble-tls\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530686 kubelet[2031]: I0517 00:53:32.530555 2031 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-cilium-cgroup\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530686 kubelet[2031]: I0517 00:53:32.530563 2031 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-hostproc\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530686 kubelet[2031]: I0517 00:53:32.530582 2031 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-bpf-maps\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530889 kubelet[2031]: I0517 00:53:32.530614 2031 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/f216ead2-5434-4436-8949-ef077d141457-xtables-lock\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.530889 kubelet[2031]: I0517 00:53:32.530624 2031 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f216ead2-5434-4436-8949-ef077d141457-clustermesh-secrets\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:32.727177 kubelet[2031]: E0517 00:53:32.727156 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:32.904465 kubelet[2031]: I0517 00:53:32.904442 2031 scope.go:117] "RemoveContainer" containerID="7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4" May 17 00:53:32.905853 env[1559]: time="2025-05-17T00:53:32.905806415Z" level=info msg="RemoveContainer for \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\"" May 17 00:53:32.914046 env[1559]: time="2025-05-17T00:53:32.913988998Z" level=info msg="RemoveContainer for \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\" returns successfully" May 17 00:53:32.914368 kubelet[2031]: I0517 00:53:32.914342 2031 scope.go:117] "RemoveContainer" containerID="44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94" May 17 00:53:32.915551 env[1559]: time="2025-05-17T00:53:32.915499274Z" level=info msg="RemoveContainer for \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\"" May 17 00:53:32.922936 env[1559]: time="2025-05-17T00:53:32.922896499Z" level=info msg="RemoveContainer for \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\" returns successfully" May 17 00:53:32.923161 kubelet[2031]: I0517 00:53:32.923142 2031 scope.go:117] "RemoveContainer" containerID="3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1" May 17 00:53:32.924431 env[1559]: time="2025-05-17T00:53:32.924398336Z" level=info msg="RemoveContainer for \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\"" May 17 00:53:32.931842 env[1559]: time="2025-05-17T00:53:32.931804200Z" level=info msg="RemoveContainer for \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\" returns successfully" May 17 00:53:32.932125 kubelet[2031]: I0517 00:53:32.932107 2031 scope.go:117] "RemoveContainer" containerID="061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5" May 17 00:53:32.933620 env[1559]: time="2025-05-17T00:53:32.933350036Z" level=info msg="RemoveContainer for \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\"" May 17 00:53:32.939988 env[1559]: time="2025-05-17T00:53:32.939951982Z" level=info msg="RemoveContainer for \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\" returns successfully" May 17 00:53:32.940189 kubelet[2031]: I0517 00:53:32.940156 2031 scope.go:117] "RemoveContainer" containerID="ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b" May 17 00:53:32.941253 env[1559]: time="2025-05-17T00:53:32.941226980Z" level=info msg="RemoveContainer for \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\"" May 17 00:53:32.948785 env[1559]: time="2025-05-17T00:53:32.948752364Z" level=info msg="RemoveContainer for \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\" returns successfully" May 17 00:53:32.949155 kubelet[2031]: I0517 00:53:32.949125 2031 scope.go:117] "RemoveContainer" containerID="7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4" May 17 
00:53:32.949435 env[1559]: time="2025-05-17T00:53:32.949349642Z" level=error msg="ContainerStatus for \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\": not found" May 17 00:53:32.949561 kubelet[2031]: E0517 00:53:32.949532 2031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\": not found" containerID="7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4" May 17 00:53:32.949674 kubelet[2031]: I0517 00:53:32.949590 2031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4"} err="failed to get container status \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\": rpc error: code = NotFound desc = an error occurred when try to find container \"7eb090f0ed78be94271edd892d54bd2770e18d961ea282b7fea7bff66ec36ab4\": not found" May 17 00:53:32.949674 kubelet[2031]: I0517 00:53:32.949672 2031 scope.go:117] "RemoveContainer" containerID="44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94" May 17 00:53:32.949873 env[1559]: time="2025-05-17T00:53:32.949823241Z" level=error msg="ContainerStatus for \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\": not found" May 17 00:53:32.949977 kubelet[2031]: E0517 00:53:32.949952 2031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\": not found" containerID="44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94" May 17 00:53:32.950031 kubelet[2031]: I0517 00:53:32.949980 2031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94"} err="failed to get container status \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\": rpc error: code = NotFound desc = an error occurred when try to find container \"44346a02d8ba265801e9b9813747a7b98fa2b07bc195f60ff3e9ca4ff5787e94\": not found" May 17 00:53:32.950031 kubelet[2031]: I0517 00:53:32.949996 2031 scope.go:117] "RemoveContainer" containerID="3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1" May 17 00:53:32.950275 env[1559]: time="2025-05-17T00:53:32.950222281Z" level=error msg="ContainerStatus for \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\": not found" May 17 00:53:32.950489 kubelet[2031]: E0517 00:53:32.950458 2031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\": not found" containerID="3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1" May 17 00:53:32.950541 kubelet[2031]: I0517 
00:53:32.950486 2031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1"} err="failed to get container status \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e48acfa1a147bd26fac7d815e56aceaed8ed4cdc0530c1954b6b6c747ff84f1\": not found" May 17 00:53:32.950541 kubelet[2031]: I0517 00:53:32.950501 2031 scope.go:117] "RemoveContainer" containerID="061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5" May 17 00:53:32.950692 env[1559]: time="2025-05-17T00:53:32.950642600Z" level=error msg="ContainerStatus for \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\": not found" May 17 00:53:32.950828 kubelet[2031]: E0517 00:53:32.950784 2031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\": not found" containerID="061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5" May 17 00:53:32.950828 kubelet[2031]: I0517 00:53:32.950818 2031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5"} err="failed to get container status \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"061d1ffb62f05807e4a58751582269467cc40fbc4dced4e1b9e0738b20a271f5\": not found" May 17 00:53:32.950911 kubelet[2031]: I0517 00:53:32.950833 2031 scope.go:117] "RemoveContainer" containerID="ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b" May 17 00:53:32.951086 env[1559]: time="2025-05-17T00:53:32.951038799Z" level=error msg="ContainerStatus for \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\": not found" May 17 00:53:32.951272 kubelet[2031]: E0517 00:53:32.951250 2031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\": not found" containerID="ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b" May 17 00:53:32.951336 kubelet[2031]: I0517 00:53:32.951273 2031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b"} err="failed to get container status \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad820a3ad9aec2a48b000ff468a07f4547be13862e6c4fa7e6a7a89f1f524b4b\": not found" May 17 00:53:33.464385 kubelet[2031]: I0517 00:53:33.464305 2031 setters.go:600] "Node became not ready" node="10.200.20.39" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:53:33Z","lastTransitionTime":"2025-05-17T00:53:33Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:53:33.727938 kubelet[2031]: E0517 00:53:33.727880 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:33.781161 kubelet[2031]: I0517 00:53:33.781128 2031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f216ead2-5434-4436-8949-ef077d141457" path="/var/lib/kubelet/pods/f216ead2-5434-4436-8949-ef077d141457/volumes" May 17 00:53:34.729185 kubelet[2031]: E0517 00:53:34.729143 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:35.206668 kubelet[2031]: E0517 00:53:35.206523 2031 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f216ead2-5434-4436-8949-ef077d141457" containerName="mount-bpf-fs" May 17 00:53:35.206955 kubelet[2031]: E0517 00:53:35.206938 2031 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f216ead2-5434-4436-8949-ef077d141457" containerName="clean-cilium-state" May 17 00:53:35.207042 kubelet[2031]: E0517 00:53:35.207031 2031 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f216ead2-5434-4436-8949-ef077d141457" containerName="mount-cgroup" May 17 00:53:35.207108 kubelet[2031]: E0517 00:53:35.207088 2031 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f216ead2-5434-4436-8949-ef077d141457" containerName="apply-sysctl-overwrites" May 17 00:53:35.207162 kubelet[2031]: E0517 00:53:35.207153 2031 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f216ead2-5434-4436-8949-ef077d141457" containerName="cilium-agent" May 17 00:53:35.207256 kubelet[2031]: I0517 00:53:35.207246 2031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f216ead2-5434-4436-8949-ef077d141457" containerName="cilium-agent" May 17 00:53:35.343086 kubelet[2031]: I0517 00:53:35.343033 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-lib-modules\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343086 kubelet[2031]: I0517 00:53:35.343086 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-bpf-maps\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343266 kubelet[2031]: I0517 00:53:35.343107 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-hostproc\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343266 kubelet[2031]: I0517 00:53:35.343125 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cni-path\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343266 kubelet[2031]: I0517 00:53:35.343151 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-etc-cni-netd\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343266 kubelet[2031]: I0517 00:53:35.343167 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-xtables-lock\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343266 kubelet[2031]: I0517 00:53:35.343185 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-config-path\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343388 kubelet[2031]: I0517 00:53:35.343204 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79ed08ab-2b68-481a-afda-79ff37d162df-cilium-config-path\") pod \"cilium-operator-5d85765b45-6bqsj\" (UID: \"79ed08ab-2b68-481a-afda-79ff37d162df\") " pod="kube-system/cilium-operator-5d85765b45-6bqsj" May 17 00:53:35.343388 kubelet[2031]: I0517 00:53:35.343234 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-host-proc-sys-net\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343388 kubelet[2031]: I0517 00:53:35.343250 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-hubble-tls\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343388 kubelet[2031]: I0517 00:53:35.343266 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d897c\" (UniqueName: \"kubernetes.io/projected/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-kube-api-access-d897c\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343388 kubelet[2031]: I0517 00:53:35.343281 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rct5j\" (UniqueName: \"kubernetes.io/projected/79ed08ab-2b68-481a-afda-79ff37d162df-kube-api-access-rct5j\") pod \"cilium-operator-5d85765b45-6bqsj\" (UID: \"79ed08ab-2b68-481a-afda-79ff37d162df\") " pod="kube-system/cilium-operator-5d85765b45-6bqsj" May 17 00:53:35.343499 kubelet[2031]: I0517 00:53:35.343311 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-run\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343499 kubelet[2031]: I0517 00:53:35.343331 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-cgroup\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343499 kubelet[2031]: I0517 00:53:35.343346 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-ipsec-secrets\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343499 kubelet[2031]: I0517 00:53:35.343361 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-host-proc-sys-kernel\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.343499 kubelet[2031]: I0517 00:53:35.343388 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-clustermesh-secrets\") pod \"cilium-q6nqc\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " pod="kube-system/cilium-q6nqc" May 17 00:53:35.511256 env[1559]: time="2025-05-17T00:53:35.511168313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6nqc,Uid:b3c5afef-bf84-4f3c-ab43-bf86b1215f73,Namespace:kube-system,Attempt:0,}" May 17 00:53:35.542844 env[1559]: time="2025-05-17T00:53:35.542469451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6bqsj,Uid:79ed08ab-2b68-481a-afda-79ff37d162df,Namespace:kube-system,Attempt:0,}" May 17 00:53:35.547500 env[1559]: time="2025-05-17T00:53:35.547398561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:53:35.547500 env[1559]: time="2025-05-17T00:53:35.547456361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:53:35.547500 env[1559]: time="2025-05-17T00:53:35.547466801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:53:35.548731 env[1559]: time="2025-05-17T00:53:35.547958280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9 pid=3579 runtime=io.containerd.runc.v2 May 17 00:53:35.583897 env[1559]: time="2025-05-17T00:53:35.583844449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6nqc,Uid:b3c5afef-bf84-4f3c-ab43-bf86b1215f73,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9\"" May 17 00:53:35.586679 env[1559]: time="2025-05-17T00:53:35.586628723Z" level=info msg="CreateContainer within sandbox \"8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:53:35.589057 env[1559]: time="2025-05-17T00:53:35.588965238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:53:35.589057 env[1559]: time="2025-05-17T00:53:35.589016918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:53:35.589057 env[1559]: time="2025-05-17T00:53:35.589028078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:53:35.590345 env[1559]: time="2025-05-17T00:53:35.589373478Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/485abbb42152022fcee536099f1ce3f28586d7740a81a370848163a5e7cbf007 pid=3620 runtime=io.containerd.runc.v2 May 17 00:53:35.629585 env[1559]: time="2025-05-17T00:53:35.629514878Z" level=info msg="CreateContainer within sandbox \"8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827\"" May 17 00:53:35.630316 env[1559]: time="2025-05-17T00:53:35.630259076Z" level=info msg="StartContainer for \"5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827\"" May 17 00:53:35.638829 env[1559]: time="2025-05-17T00:53:35.638761819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6bqsj,Uid:79ed08ab-2b68-481a-afda-79ff37d162df,Namespace:kube-system,Attempt:0,} returns sandbox id \"485abbb42152022fcee536099f1ce3f28586d7740a81a370848163a5e7cbf007\"" May 17 00:53:35.641726 env[1559]: time="2025-05-17T00:53:35.641674653Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:53:35.694994 env[1559]: time="2025-05-17T00:53:35.694941627Z" level=info msg="StartContainer for \"5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827\" returns successfully" May 17 00:53:35.730148 kubelet[2031]: E0517 00:53:35.730099 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:35.753001 env[1559]: time="2025-05-17T00:53:35.752939672Z" level=info msg="shim disconnected" id=5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827 May 17 00:53:35.753001 env[1559]: time="2025-05-17T00:53:35.752995192Z" level=warning msg="cleaning up after shim disconnected" id=5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827 namespace=k8s.io May 17 00:53:35.753001 env[1559]: time="2025-05-17T00:53:35.753004431Z" level=info msg="cleaning up dead shim" May 17 00:53:35.762527 env[1559]: time="2025-05-17T00:53:35.761836054Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3704 runtime=io.containerd.runc.v2\n" May 17 00:53:35.914817 env[1559]: time="2025-05-17T00:53:35.914779669Z" level=info msg="CreateContainer within sandbox \"8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:53:35.948339 env[1559]: time="2025-05-17T00:53:35.948285442Z" level=info msg="CreateContainer within sandbox \"8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd\"" May 17 00:53:35.949129 
env[1559]: time="2025-05-17T00:53:35.949101361Z" level=info msg="StartContainer for \"15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd\"" May 17 00:53:35.999013 env[1559]: time="2025-05-17T00:53:35.998971101Z" level=info msg="StartContainer for \"15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd\" returns successfully" May 17 00:53:36.024381 env[1559]: time="2025-05-17T00:53:36.023965813Z" level=info msg="shim disconnected" id=15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd May 17 00:53:36.024633 env[1559]: time="2025-05-17T00:53:36.024608851Z" level=warning msg="cleaning up after shim disconnected" id=15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd namespace=k8s.io May 17 00:53:36.024706 env[1559]: time="2025-05-17T00:53:36.024692971Z" level=info msg="cleaning up dead shim" May 17 00:53:36.032131 env[1559]: time="2025-05-17T00:53:36.032092797Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3764 runtime=io.containerd.runc.v2\n" May 17 00:53:36.730843 kubelet[2031]: E0517 00:53:36.730779 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:36.766638 kubelet[2031]: E0517 00:53:36.766592 2031 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:53:36.916647 env[1559]: time="2025-05-17T00:53:36.916562592Z" level=info msg="StopPodSandbox for \"8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9\"" May 17 00:53:36.917086 env[1559]: time="2025-05-17T00:53:36.917050111Z" level=info msg="Container to stop \"5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:53:36.917167 env[1559]: time="2025-05-17T00:53:36.917150231Z" level=info msg="Container to stop \"15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:53:36.921736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9-shm.mount: Deactivated successfully. May 17 00:53:36.978232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9-rootfs.mount: Deactivated successfully. 
May 17 00:53:37.007128 env[1559]: time="2025-05-17T00:53:37.006670577Z" level=info msg="shim disconnected" id=8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9 May 17 00:53:37.007392 env[1559]: time="2025-05-17T00:53:37.007369776Z" level=warning msg="cleaning up after shim disconnected" id=8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9 namespace=k8s.io May 17 00:53:37.007461 env[1559]: time="2025-05-17T00:53:37.007447775Z" level=info msg="cleaning up dead shim" May 17 00:53:37.016629 env[1559]: time="2025-05-17T00:53:37.016578958Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3797 runtime=io.containerd.runc.v2\n" May 17 00:53:37.017097 env[1559]: time="2025-05-17T00:53:37.017069517Z" level=info msg="TearDown network for sandbox \"8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9\" successfully" May 17 00:53:37.017198 env[1559]: time="2025-05-17T00:53:37.017179437Z" level=info msg="StopPodSandbox for \"8d3501b513f1d5fc4e3847948f65c5e26f5e5a4227895db66a323d24ad7cdfa9\" returns successfully" May 17 00:53:37.051151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671003503.mount: Deactivated successfully. May 17 00:53:37.153650 kubelet[2031]: I0517 00:53:37.153599 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-lib-modules\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.153650 kubelet[2031]: I0517 00:53:37.153646 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-hostproc\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.153871 kubelet[2031]: I0517 00:53:37.153676 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-hubble-tls\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.153871 kubelet[2031]: I0517 00:53:37.153692 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-cgroup\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.153871 kubelet[2031]: I0517 00:53:37.153739 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-clustermesh-secrets\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.153871 kubelet[2031]: I0517 00:53:37.153755 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cni-path\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.153871 kubelet[2031]: I0517 00:53:37.153773 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-config-path\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.153871 kubelet[2031]: I0517 00:53:37.153788 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-host-proc-sys-net\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.154014 kubelet[2031]: I0517 00:53:37.153803 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-run\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.154014 kubelet[2031]: I0517 00:53:37.153820 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-ipsec-secrets\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.154014 kubelet[2031]: I0517 00:53:37.153838 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-host-proc-sys-kernel\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.154014 kubelet[2031]: I0517 00:53:37.153852 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-etc-cni-netd\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.154014 kubelet[2031]: I0517 00:53:37.153867 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-xtables-lock\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.154014 kubelet[2031]: I0517 00:53:37.153882 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-bpf-maps\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.154152 kubelet[2031]: I0517 00:53:37.153898 2031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d897c\" (UniqueName: \"kubernetes.io/projected/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-kube-api-access-d897c\") pod \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\" (UID: \"b3c5afef-bf84-4f3c-ab43-bf86b1215f73\") " May 17 00:53:37.156121 kubelet[2031]: I0517 00:53:37.154633 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.156121 kubelet[2031]: I0517 00:53:37.154644 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.156121 kubelet[2031]: I0517 00:53:37.154685 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.156121 kubelet[2031]: I0517 00:53:37.154694 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-hostproc" (OuterVolumeSpecName: "hostproc") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.156121 kubelet[2031]: I0517 00:53:37.155368 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.156390 kubelet[2031]: I0517 00:53:37.155425 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.156390 kubelet[2031]: I0517 00:53:37.155444 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.156390 kubelet[2031]: I0517 00:53:37.155461 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.157382 kubelet[2031]: I0517 00:53:37.157336 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-kube-api-access-d897c" (OuterVolumeSpecName: "kube-api-access-d897c") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "kube-api-access-d897c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:53:37.158153 kubelet[2031]: I0517 00:53:37.158117 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:53:37.158837 kubelet[2031]: I0517 00:53:37.158794 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.158976 kubelet[2031]: I0517 00:53:37.158959 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cni-path" (OuterVolumeSpecName: "cni-path") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:37.161182 kubelet[2031]: I0517 00:53:37.161151 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:53:37.161402 kubelet[2031]: I0517 00:53:37.161384 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:53:37.164069 kubelet[2031]: I0517 00:53:37.164023 2031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b3c5afef-bf84-4f3c-ab43-bf86b1215f73" (UID: "b3c5afef-bf84-4f3c-ab43-bf86b1215f73"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:53:37.254368 kubelet[2031]: I0517 00:53:37.254326 2031 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-bpf-maps\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.254602 kubelet[2031]: I0517 00:53:37.254555 2031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d897c\" (UniqueName: \"kubernetes.io/projected/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-kube-api-access-d897c\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.254691 kubelet[2031]: I0517 00:53:37.254677 2031 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-hubble-tls\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.254759 kubelet[2031]: I0517 00:53:37.254750 2031 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-cgroup\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.254837 kubelet[2031]: I0517 00:53:37.254828 2031 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-lib-modules\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.254902 kubelet[2031]: I0517 00:53:37.254892 2031 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-hostproc\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.254979 kubelet[2031]: I0517 00:53:37.254967 2031 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cni-path\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.255049 kubelet[2031]: I0517 00:53:37.255039 2031 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-config-path\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.255120 kubelet[2031]: I0517 00:53:37.255110 2031 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-host-proc-sys-net\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.255185 kubelet[2031]: I0517 00:53:37.255175 2031 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-clustermesh-secrets\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.255255 kubelet[2031]: I0517 00:53:37.255245 2031 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-ipsec-secrets\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.255323 kubelet[2031]: I0517 00:53:37.255312 2031 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-host-proc-sys-kernel\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.255394 kubelet[2031]: I0517 00:53:37.255384 2031 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-cilium-run\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.255457 kubelet[2031]: I0517 00:53:37.255448 2031 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-etc-cni-netd\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.255524 kubelet[2031]: I0517 00:53:37.255515 2031 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3c5afef-bf84-4f3c-ab43-bf86b1215f73-xtables-lock\") on node \"10.200.20.39\" DevicePath \"\"" May 17 00:53:37.453065 systemd[1]: var-lib-kubelet-pods-b3c5afef\x2dbf84\x2d4f3c\x2dab43\x2dbf86b1215f73-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd897c.mount: Deactivated successfully. May 17 00:53:37.453215 systemd[1]: var-lib-kubelet-pods-b3c5afef\x2dbf84\x2d4f3c\x2dab43\x2dbf86b1215f73-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:53:37.453299 systemd[1]: var-lib-kubelet-pods-b3c5afef\x2dbf84\x2d4f3c\x2dab43\x2dbf86b1215f73-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:53:37.453382 systemd[1]: var-lib-kubelet-pods-b3c5afef\x2dbf84\x2d4f3c\x2dab43\x2dbf86b1215f73-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:53:37.694634 env[1559]: time="2025-05-17T00:53:37.694554064Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:37.702463 env[1559]: time="2025-05-17T00:53:37.702425769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:37.706971 env[1559]: time="2025-05-17T00:53:37.706520282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:53:37.707222 env[1559]: time="2025-05-17T00:53:37.706901481Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 17 00:53:37.710242 env[1559]: time="2025-05-17T00:53:37.710198715Z" level=info msg="CreateContainer within sandbox \"485abbb42152022fcee536099f1ce3f28586d7740a81a370848163a5e7cbf007\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:53:37.731554 kubelet[2031]: E0517 00:53:37.731504 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:37.739320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353349895.mount: Deactivated successfully. May 17 00:53:37.746416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744450041.mount: Deactivated successfully. 
May 17 00:53:37.757650 env[1559]: time="2025-05-17T00:53:37.757592264Z" level=info msg="CreateContainer within sandbox \"485abbb42152022fcee536099f1ce3f28586d7740a81a370848163a5e7cbf007\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5183b22b30eb162a62f4bf0bdb4095d4047565269b7baddb3faa8ca265dbdf55\"" May 17 00:53:37.758445 env[1559]: time="2025-05-17T00:53:37.758414303Z" level=info msg="StartContainer for \"5183b22b30eb162a62f4bf0bdb4095d4047565269b7baddb3faa8ca265dbdf55\"" May 17 00:53:37.812219 env[1559]: time="2025-05-17T00:53:37.812170120Z" level=info msg="StartContainer for \"5183b22b30eb162a62f4bf0bdb4095d4047565269b7baddb3faa8ca265dbdf55\" returns successfully" May 17 00:53:37.922414 kubelet[2031]: I0517 00:53:37.921880 2031 scope.go:117] "RemoveContainer" containerID="15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd" May 17 00:53:37.923764 env[1559]: time="2025-05-17T00:53:37.923723267Z" level=info msg="RemoveContainer for \"15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd\"" May 17 00:53:37.939609 env[1559]: time="2025-05-17T00:53:37.938034480Z" level=info msg="RemoveContainer for \"15730828ad036662b7e3d90dcb6fe7ac8d9efcad6360bf47b067dd994a6d3dbd\" returns successfully" May 17 00:53:37.941797 kubelet[2031]: I0517 00:53:37.941754 2031 scope.go:117] "RemoveContainer" containerID="5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827" May 17 00:53:37.943375 env[1559]: time="2025-05-17T00:53:37.943333070Z" level=info msg="RemoveContainer for \"5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827\"" May 17 00:53:37.945331 kubelet[2031]: I0517 00:53:37.945266 2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6bqsj" podStartSLOduration=0.877326163 podStartE2EDuration="2.945156066s" podCreationTimestamp="2025-05-17 00:53:35 +0000 UTC" firstStartedPulling="2025-05-17 00:53:35.640790815 +0000 UTC m=+64.704140938" lastFinishedPulling="2025-05-17 00:53:37.708620718 +0000 UTC m=+66.771970841" observedRunningTime="2025-05-17 00:53:37.945073227 +0000 UTC m=+67.008423390" watchObservedRunningTime="2025-05-17 00:53:37.945156066 +0000 UTC m=+67.008506229" May 17 00:53:37.952544 env[1559]: time="2025-05-17T00:53:37.952481932Z" level=info msg="RemoveContainer for \"5805458f669625c6e9fa80fbaf8cf630d2f8d14dc99b64b25d8d7e39ec300827\" returns successfully" May 17 00:53:38.008980 kubelet[2031]: E0517 00:53:38.008932 2031 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3c5afef-bf84-4f3c-ab43-bf86b1215f73" containerName="apply-sysctl-overwrites" May 17 00:53:38.009196 kubelet[2031]: E0517 00:53:38.009184 2031 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3c5afef-bf84-4f3c-ab43-bf86b1215f73" containerName="mount-cgroup" May 17 00:53:38.009297 kubelet[2031]: I0517 00:53:38.009285 2031 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c5afef-bf84-4f3c-ab43-bf86b1215f73" containerName="apply-sysctl-overwrites" May 17 00:53:38.159694 kubelet[2031]: I0517 00:53:38.159640 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b773423-c546-45ff-8c3c-5403fc1c8a09-clustermesh-secrets\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.159922 kubelet[2031]: I0517 00:53:38.159906 2031 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8b773423-c546-45ff-8c3c-5403fc1c8a09-cilium-ipsec-secrets\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.160047 kubelet[2031]: I0517 00:53:38.160034 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-cilium-cgroup\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.160171 kubelet[2031]: I0517 00:53:38.160156 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-cni-path\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.160282 kubelet[2031]: I0517 00:53:38.160269 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-xtables-lock\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.160397 kubelet[2031]: I0517 00:53:38.160384 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-hostproc\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.160508 kubelet[2031]: I0517 00:53:38.160494 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b773423-c546-45ff-8c3c-5403fc1c8a09-cilium-config-path\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.160650 kubelet[2031]: I0517 00:53:38.160634 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-host-proc-sys-kernel\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.160770 kubelet[2031]: I0517 00:53:38.160755 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vn6c\" (UniqueName: \"kubernetes.io/projected/8b773423-c546-45ff-8c3c-5403fc1c8a09-kube-api-access-9vn6c\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.160900 kubelet[2031]: I0517 00:53:38.160886 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-bpf-maps\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.161002 kubelet[2031]: I0517 00:53:38.160990 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-etc-cni-netd\") 
pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.161100 kubelet[2031]: I0517 00:53:38.161087 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-lib-modules\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.161196 kubelet[2031]: I0517 00:53:38.161181 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-host-proc-sys-net\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.161297 kubelet[2031]: I0517 00:53:38.161284 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b773423-c546-45ff-8c3c-5403fc1c8a09-hubble-tls\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.161393 kubelet[2031]: I0517 00:53:38.161380 2031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b773423-c546-45ff-8c3c-5403fc1c8a09-cilium-run\") pod \"cilium-lqcwd\" (UID: \"8b773423-c546-45ff-8c3c-5403fc1c8a09\") " pod="kube-system/cilium-lqcwd" May 17 00:53:38.315266 env[1559]: time="2025-05-17T00:53:38.314682294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqcwd,Uid:8b773423-c546-45ff-8c3c-5403fc1c8a09,Namespace:kube-system,Attempt:0,}" May 17 00:53:38.345939 env[1559]: time="2025-05-17T00:53:38.345864356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:53:38.346152 env[1559]: time="2025-05-17T00:53:38.346126115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:53:38.346258 env[1559]: time="2025-05-17T00:53:38.346226235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:53:38.346535 env[1559]: time="2025-05-17T00:53:38.346486475Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7 pid=3865 runtime=io.containerd.runc.v2 May 17 00:53:38.381107 env[1559]: time="2025-05-17T00:53:38.381066530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqcwd,Uid:8b773423-c546-45ff-8c3c-5403fc1c8a09,Namespace:kube-system,Attempt:0,} returns sandbox id \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\"" May 17 00:53:38.384104 env[1559]: time="2025-05-17T00:53:38.384067445Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:53:38.415766 env[1559]: time="2025-05-17T00:53:38.415698025Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c4c41fd8e431e24aafb04c66b1f906997eeca67c01e9c8014e1f3e24f9e0adc3\"" May 17 00:53:38.416575 env[1559]: time="2025-05-17T00:53:38.416486184Z" level=info msg="StartContainer for \"c4c41fd8e431e24aafb04c66b1f906997eeca67c01e9c8014e1f3e24f9e0adc3\"" May 17 00:53:38.484436 env[1559]: time="2025-05-17T00:53:38.483328859Z" level=info msg="StartContainer for \"c4c41fd8e431e24aafb04c66b1f906997eeca67c01e9c8014e1f3e24f9e0adc3\" returns successfully" May 17 00:53:38.504520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4c41fd8e431e24aafb04c66b1f906997eeca67c01e9c8014e1f3e24f9e0adc3-rootfs.mount: Deactivated successfully. May 17 00:53:38.798646 kubelet[2031]: E0517 00:53:38.731924 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:38.856306 env[1559]: time="2025-05-17T00:53:38.856258803Z" level=info msg="shim disconnected" id=c4c41fd8e431e24aafb04c66b1f906997eeca67c01e9c8014e1f3e24f9e0adc3 May 17 00:53:38.856604 env[1559]: time="2025-05-17T00:53:38.856540322Z" level=warning msg="cleaning up after shim disconnected" id=c4c41fd8e431e24aafb04c66b1f906997eeca67c01e9c8014e1f3e24f9e0adc3 namespace=k8s.io May 17 00:53:38.856888 env[1559]: time="2025-05-17T00:53:38.856861442Z" level=info msg="cleaning up dead shim" May 17 00:53:38.865070 env[1559]: time="2025-05-17T00:53:38.865026586Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\n" May 17 00:53:38.928745 env[1559]: time="2025-05-17T00:53:38.928693747Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:53:38.954735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3214122366.mount: Deactivated successfully. 
May 17 00:53:38.974675 env[1559]: time="2025-05-17T00:53:38.974606262Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"038e578c625662ec005ab27a6ffc5faf48cbf5fe0feadc16765ce3e9577b6d11\"" May 17 00:53:38.975547 env[1559]: time="2025-05-17T00:53:38.975513060Z" level=info msg="StartContainer for \"038e578c625662ec005ab27a6ffc5faf48cbf5fe0feadc16765ce3e9577b6d11\"" May 17 00:53:39.024647 env[1559]: time="2025-05-17T00:53:39.024562729Z" level=info msg="StartContainer for \"038e578c625662ec005ab27a6ffc5faf48cbf5fe0feadc16765ce3e9577b6d11\" returns successfully" May 17 00:53:39.050553 env[1559]: time="2025-05-17T00:53:39.050432522Z" level=info msg="shim disconnected" id=038e578c625662ec005ab27a6ffc5faf48cbf5fe0feadc16765ce3e9577b6d11 May 17 00:53:39.051017 env[1559]: time="2025-05-17T00:53:39.050989481Z" level=warning msg="cleaning up after shim disconnected" id=038e578c625662ec005ab27a6ffc5faf48cbf5fe0feadc16765ce3e9577b6d11 namespace=k8s.io May 17 00:53:39.051103 env[1559]: time="2025-05-17T00:53:39.051089241Z" level=info msg="cleaning up dead shim" May 17 00:53:39.058416 env[1559]: time="2025-05-17T00:53:39.058374348Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4012 runtime=io.containerd.runc.v2\n" May 17 00:53:39.453269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374335642.mount: Deactivated successfully. May 17 00:53:39.732990 kubelet[2031]: E0517 00:53:39.732944 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:39.781588 kubelet[2031]: I0517 00:53:39.781545 2031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c5afef-bf84-4f3c-ab43-bf86b1215f73" path="/var/lib/kubelet/pods/b3c5afef-bf84-4f3c-ab43-bf86b1215f73/volumes" May 17 00:53:39.931352 env[1559]: time="2025-05-17T00:53:39.931303032Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:53:39.957938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449523083.mount: Deactivated successfully. May 17 00:53:39.965890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215451866.mount: Deactivated successfully. 
May 17 00:53:39.982614 env[1559]: time="2025-05-17T00:53:39.982545458Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"68f2d89d29f1c92db6a6854463afc390b03487478c69e61cc067cfbb0ffd4e45\"" May 17 00:53:39.983450 env[1559]: time="2025-05-17T00:53:39.983173017Z" level=info msg="StartContainer for \"68f2d89d29f1c92db6a6854463afc390b03487478c69e61cc067cfbb0ffd4e45\"" May 17 00:53:40.033877 env[1559]: time="2025-05-17T00:53:40.033830765Z" level=info msg="StartContainer for \"68f2d89d29f1c92db6a6854463afc390b03487478c69e61cc067cfbb0ffd4e45\" returns successfully" May 17 00:53:40.062303 env[1559]: time="2025-05-17T00:53:40.062255715Z" level=info msg="shim disconnected" id=68f2d89d29f1c92db6a6854463afc390b03487478c69e61cc067cfbb0ffd4e45 May 17 00:53:40.062627 env[1559]: time="2025-05-17T00:53:40.062608074Z" level=warning msg="cleaning up after shim disconnected" id=68f2d89d29f1c92db6a6854463afc390b03487478c69e61cc067cfbb0ffd4e45 namespace=k8s.io May 17 00:53:40.062722 env[1559]: time="2025-05-17T00:53:40.062708474Z" level=info msg="cleaning up dead shim" May 17 00:53:40.070323 env[1559]: time="2025-05-17T00:53:40.070280300Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4066 runtime=io.containerd.runc.v2\n" May 17 00:53:40.733820 kubelet[2031]: E0517 00:53:40.733781 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:40.935459 env[1559]: time="2025-05-17T00:53:40.935415671Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:53:40.962996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058149590.mount: Deactivated successfully. May 17 00:53:40.969178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543850030.mount: Deactivated successfully. 
May 17 00:53:40.983336 env[1559]: time="2025-05-17T00:53:40.983280226Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"243b4c1e68d72c5fdc7e1cb09fa2b69a2c3a34829129199f621f92f651107ac8\"" May 17 00:53:40.984422 env[1559]: time="2025-05-17T00:53:40.984087224Z" level=info msg="StartContainer for \"243b4c1e68d72c5fdc7e1cb09fa2b69a2c3a34829129199f621f92f651107ac8\"" May 17 00:53:41.031679 env[1559]: time="2025-05-17T00:53:41.031621060Z" level=info msg="StartContainer for \"243b4c1e68d72c5fdc7e1cb09fa2b69a2c3a34829129199f621f92f651107ac8\" returns successfully" May 17 00:53:41.065116 env[1559]: time="2025-05-17T00:53:41.065068042Z" level=info msg="shim disconnected" id=243b4c1e68d72c5fdc7e1cb09fa2b69a2c3a34829129199f621f92f651107ac8 May 17 00:53:41.065373 env[1559]: time="2025-05-17T00:53:41.065352481Z" level=warning msg="cleaning up after shim disconnected" id=243b4c1e68d72c5fdc7e1cb09fa2b69a2c3a34829129199f621f92f651107ac8 namespace=k8s.io May 17 00:53:41.065434 env[1559]: time="2025-05-17T00:53:41.065420721Z" level=info msg="cleaning up dead shim" May 17 00:53:41.073680 env[1559]: time="2025-05-17T00:53:41.073633507Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4121 runtime=io.containerd.runc.v2\n" May 17 00:53:41.734919 kubelet[2031]: E0517 00:53:41.734860 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:41.768016 kubelet[2031]: E0517 00:53:41.767963 2031 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:53:41.938819 env[1559]: time="2025-05-17T00:53:41.938781430Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:53:41.967290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325974858.mount: Deactivated successfully. May 17 00:53:41.977088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757186312.mount: Deactivated successfully. May 17 00:53:41.990995 env[1559]: time="2025-05-17T00:53:41.990690418Z" level=info msg="CreateContainer within sandbox \"58b42a90133f949add2452a9e2884bbb948e1620fde84976ee080c1b73b602a7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"510f29ccdc1ff75c34efeed4dfc6fca23fff3a61b3d43f46a305c7e581fd96d5\"" May 17 00:53:41.991513 env[1559]: time="2025-05-17T00:53:41.991481257Z" level=info msg="StartContainer for \"510f29ccdc1ff75c34efeed4dfc6fca23fff3a61b3d43f46a305c7e581fd96d5\"" May 17 00:53:42.044150 env[1559]: time="2025-05-17T00:53:42.044098446Z" level=info msg="StartContainer for \"510f29ccdc1ff75c34efeed4dfc6fca23fff3a61b3d43f46a305c7e581fd96d5\" returns successfully" May 17 00:53:42.311589 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 17 00:53:42.735850 kubelet[2031]: E0517 00:53:42.735806 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:43.247869 systemd[1]: run-containerd-runc-k8s.io-510f29ccdc1ff75c34efeed4dfc6fca23fff3a61b3d43f46a305c7e581fd96d5-runc.VRMD8Z.mount: Deactivated successfully. 
May 17 00:53:43.736697 kubelet[2031]: E0517 00:53:43.736644 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:44.736821 kubelet[2031]: E0517 00:53:44.736778 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:44.984077 systemd-networkd[1725]: lxc_health: Link UP May 17 00:53:45.015829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:53:45.015548 systemd-networkd[1725]: lxc_health: Gained carrier May 17 00:53:45.401044 systemd[1]: run-containerd-runc-k8s.io-510f29ccdc1ff75c34efeed4dfc6fca23fff3a61b3d43f46a305c7e581fd96d5-runc.gZThC5.mount: Deactivated successfully. May 17 00:53:45.736968 kubelet[2031]: E0517 00:53:45.736927 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:46.335854 kubelet[2031]: I0517 00:53:46.335794 2031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lqcwd" podStartSLOduration=9.335774361 podStartE2EDuration="9.335774361s" podCreationTimestamp="2025-05-17 00:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:53:42.961375191 +0000 UTC m=+72.024725354" watchObservedRunningTime="2025-05-17 00:53:46.335774361 +0000 UTC m=+75.399124524" May 17 00:53:46.469766 systemd-networkd[1725]: lxc_health: Gained IPv6LL May 17 00:53:46.738002 kubelet[2031]: E0517 00:53:46.737958 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:47.548625 systemd[1]: run-containerd-runc-k8s.io-510f29ccdc1ff75c34efeed4dfc6fca23fff3a61b3d43f46a305c7e581fd96d5-runc.JprrG8.mount: Deactivated successfully. May 17 00:53:47.738740 kubelet[2031]: E0517 00:53:47.738690 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:48.739179 kubelet[2031]: E0517 00:53:48.739143 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:49.679922 systemd[1]: run-containerd-runc-k8s.io-510f29ccdc1ff75c34efeed4dfc6fca23fff3a61b3d43f46a305c7e581fd96d5-runc.PB0ztk.mount: Deactivated successfully. May 17 00:53:49.740358 kubelet[2031]: E0517 00:53:49.740320 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:50.740659 kubelet[2031]: E0517 00:53:50.740612 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:51.679648 kubelet[2031]: E0517 00:53:51.679610 2031 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:51.741230 kubelet[2031]: E0517 00:53:51.741195 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:51.795069 systemd[1]: run-containerd-runc-k8s.io-510f29ccdc1ff75c34efeed4dfc6fca23fff3a61b3d43f46a305c7e581fd96d5-runc.MoEqzh.mount: Deactivated successfully. 
May 17 00:53:52.742069 kubelet[2031]: E0517 00:53:52.742032 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:53.742966 kubelet[2031]: E0517 00:53:53.742926 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:54.744095 kubelet[2031]: E0517 00:53:54.744059 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:55.745043 kubelet[2031]: E0517 00:53:55.745009 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:56.745691 kubelet[2031]: E0517 00:53:56.745655 2031 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"