May 17 00:50:35.022149 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 17 00:50:35.022169 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri May 16 23:24:21 -00 2025 May 17 00:50:35.022177 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 17 00:50:35.022184 kernel: printk: bootconsole [pl11] enabled May 17 00:50:35.022189 kernel: efi: EFI v2.70 by EDK II May 17 00:50:35.022195 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98 May 17 00:50:35.022201 kernel: random: crng init done May 17 00:50:35.022207 kernel: ACPI: Early table checksum verification disabled May 17 00:50:35.022212 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 17 00:50:35.022217 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022223 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022229 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 17 00:50:35.022235 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022241 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022248 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022253 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022259 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022266 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022272 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 17 00:50:35.022278 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:50:35.022283 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 17 00:50:35.022289 kernel: NUMA: Failed to initialise from firmware May 17 00:50:35.022295 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] May 17 00:50:35.022301 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff] May 17 00:50:35.022306 kernel: Zone ranges: May 17 00:50:35.022312 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 17 00:50:35.022318 kernel: DMA32 empty May 17 00:50:35.022323 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 17 00:50:35.022330 kernel: Movable zone start for each node May 17 00:50:35.022336 kernel: Early memory node ranges May 17 00:50:35.022342 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 17 00:50:35.022347 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] May 17 00:50:35.022353 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 17 00:50:35.022359 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 17 00:50:35.022364 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 17 00:50:35.022370 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 17 00:50:35.022376 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 17 00:50:35.022382 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 17 
00:50:35.022387 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 17 00:50:35.022393 kernel: psci: probing for conduit method from ACPI. May 17 00:50:35.022402 kernel: psci: PSCIv1.1 detected in firmware. May 17 00:50:35.022420 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:50:35.022426 kernel: psci: MIGRATE_INFO_TYPE not supported. May 17 00:50:35.022433 kernel: psci: SMC Calling Convention v1.4 May 17 00:50:35.022439 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 May 17 00:50:35.022446 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 May 17 00:50:35.022453 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 17 00:50:35.022459 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 17 00:50:35.022465 kernel: pcpu-alloc: [0] 0 [0] 1 May 17 00:50:35.022471 kernel: Detected PIPT I-cache on CPU0 May 17 00:50:35.022477 kernel: CPU features: detected: GIC system register CPU interface May 17 00:50:35.022483 kernel: CPU features: detected: Hardware dirty bit management May 17 00:50:35.022489 kernel: CPU features: detected: Spectre-BHB May 17 00:50:35.022495 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 00:50:35.022501 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 00:50:35.022507 kernel: CPU features: detected: ARM erratum 1418040 May 17 00:50:35.022515 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 17 00:50:35.022521 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 00:50:35.022527 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 17 00:50:35.022533 kernel: Policy zone: Normal May 17 00:50:35.022540 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5 May 17 00:50:35.022547 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:50:35.022553 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:50:35.022559 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:50:35.022566 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:50:35.022572 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) May 17 00:50:35.022579 kernel: Memory: 3986944K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207216K reserved, 0K cma-reserved) May 17 00:50:35.022587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:50:35.022593 kernel: trace event string verifier disabled May 17 00:50:35.022600 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:50:35.022606 kernel: rcu: RCU event tracing is enabled. May 17 00:50:35.022612 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:50:35.022619 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:50:35.022625 kernel: Tracing variant of Tasks RCU enabled. May 17 00:50:35.022631 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:50:35.022637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:50:35.022643 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:50:35.022649 kernel: GICv3: 960 SPIs implemented May 17 00:50:35.022657 kernel: GICv3: 0 Extended SPIs implemented May 17 00:50:35.022663 kernel: GICv3: Distributor has no Range Selector support May 17 00:50:35.022669 kernel: Root IRQ handler: gic_handle_irq May 17 00:50:35.022675 kernel: GICv3: 16 PPIs implemented May 17 00:50:35.022681 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 17 00:50:35.022687 kernel: ITS: No ITS available, not enabling LPIs May 17 00:50:35.022693 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:50:35.022699 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 17 00:50:35.022705 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 00:50:35.022712 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 00:50:35.022718 kernel: Console: colour dummy device 80x25 May 17 00:50:35.022725 kernel: printk: console [tty1] enabled May 17 00:50:35.022732 kernel: ACPI: Core revision 20210730 May 17 00:50:35.022739 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 00:50:35.022745 kernel: pid_max: default: 32768 minimum: 301 May 17 00:50:35.022751 kernel: LSM: Security Framework initializing May 17 00:50:35.022757 kernel: SELinux: Initializing. May 17 00:50:35.022764 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:50:35.022770 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:50:35.022777 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 17 00:50:35.022784 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 17 00:50:35.022790 kernel: rcu: Hierarchical SRCU implementation. May 17 00:50:35.022796 kernel: Remapping and enabling EFI services. May 17 00:50:35.022803 kernel: smp: Bringing up secondary CPUs ... May 17 00:50:35.022809 kernel: Detected PIPT I-cache on CPU1 May 17 00:50:35.022815 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 17 00:50:35.022822 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:50:35.022839 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 17 00:50:35.022846 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:50:35.022853 kernel: SMP: Total of 2 processors activated. 
May 17 00:50:35.022860 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:50:35.022867 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 17 00:50:35.022873 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:50:35.022879 kernel: CPU features: detected: CRC32 instructions May 17 00:50:35.022885 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:50:35.022892 kernel: CPU features: detected: LSE atomic instructions May 17 00:50:35.022898 kernel: CPU features: detected: Privileged Access Never May 17 00:50:35.022904 kernel: CPU: All CPU(s) started at EL1 May 17 00:50:35.022910 kernel: alternatives: patching kernel code May 17 00:50:35.022918 kernel: devtmpfs: initialized May 17 00:50:35.022928 kernel: KASLR enabled May 17 00:50:35.022935 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:50:35.022943 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:50:35.022950 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:50:35.022956 kernel: SMBIOS 3.1.0 present. May 17 00:50:35.022963 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 17 00:50:35.022969 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:50:35.022976 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 00:50:35.022984 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:50:35.022991 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:50:35.022998 kernel: audit: initializing netlink subsys (disabled) May 17 00:50:35.023004 kernel: audit: type=2000 audit(0.085:1): state=initialized audit_enabled=0 res=1 May 17 00:50:35.023011 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:50:35.023017 kernel: cpuidle: using governor menu May 17 00:50:35.023024 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 17 00:50:35.023031 kernel: ASID allocator initialised with 32768 entries
May 17 00:50:35.023038 kernel: ACPI: bus type PCI registered
May 17 00:50:35.023045 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:50:35.023051 kernel: Serial: AMBA PL011 UART driver
May 17 00:50:35.023058 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:50:35.023065 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:50:35.023071 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:50:35.023078 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:50:35.023085 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:50:35.023092 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 17 00:50:35.023099 kernel: ACPI: Added _OSI(Module Device)
May 17 00:50:35.023106 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:50:35.023112 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:50:35.023119 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:50:35.023126 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:50:35.023132 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:50:35.023139 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:50:35.023145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:50:35.023153 kernel: ACPI: Interpreter enabled
May 17 00:50:35.023160 kernel: ACPI: Using GIC for interrupt routing
May 17 00:50:35.023166 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 17 00:50:35.023173 kernel: printk: console [ttyAMA0] enabled
May 17 00:50:35.023180 kernel: printk: bootconsole [pl11] disabled
May 17 00:50:35.023186 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 17 00:50:35.023193 kernel: iommu: Default domain type: Translated
May 17 00:50:35.023200 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 00:50:35.023206 kernel: vgaarb: loaded
May 17 00:50:35.023213 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:50:35.023221 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:50:35.023227 kernel: PTP clock support registered
May 17 00:50:35.023234 kernel: Registered efivars operations
May 17 00:50:35.023240 kernel: No ACPI PMU IRQ for CPU0
May 17 00:50:35.023247 kernel: No ACPI PMU IRQ for CPU1
May 17 00:50:35.023253 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 00:50:35.023260 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:50:35.023266 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:50:35.023274 kernel: pnp: PnP ACPI init
May 17 00:50:35.023281 kernel: pnp: PnP ACPI: found 0 devices
May 17 00:50:35.023287 kernel: NET: Registered PF_INET protocol family
May 17 00:50:35.023294 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:50:35.023301 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:50:35.023307 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:50:35.023314 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:50:35.023321 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 17 00:50:35.023327 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:50:35.023335 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:50:35.023342 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:50:35.023348 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:50:35.023355 kernel: PCI: CLS 0 bytes, default 64
May 17 00:50:35.023361 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
May 17 00:50:35.023368 kernel: kvm [1]: HYP mode not available
May 17 00:50:35.023375 kernel: Initialise system trusted keyrings
May 17 00:50:35.023381 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:50:35.023388 kernel: Key type asymmetric registered
May 17 00:50:35.023395 kernel: Asymmetric key parser 'x509' registered
May 17 00:50:35.023402 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:50:35.023417 kernel: io scheduler mq-deadline registered
May 17 00:50:35.023423 kernel: io scheduler kyber registered
May 17 00:50:35.023430 kernel: io scheduler bfq registered
May 17 00:50:35.023437 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:50:35.023443 kernel: thunder_xcv, ver 1.0
May 17 00:50:35.023450 kernel: thunder_bgx, ver 1.0
May 17 00:50:35.023456 kernel: nicpf, ver 1.0
May 17 00:50:35.023462 kernel: nicvf, ver 1.0
May 17 00:50:35.023575 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 17 00:50:35.023635 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:50:34 UTC (1747443034)
May 17 00:50:35.023644 kernel: efifb: probing for efifb
May 17 00:50:35.023650 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 17 00:50:35.023657 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 17 00:50:35.023664 kernel: efifb: scrolling: redraw
May 17 00:50:35.023671 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:50:35.023679 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:50:35.023686 kernel: fb0: EFI VGA frame buffer device
May 17 00:50:35.023693 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 17 00:50:35.023699 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:50:35.023706 kernel: NET: Registered PF_INET6 protocol family May 17 00:50:35.023712 kernel: Segment Routing with IPv6 May 17 00:50:35.023719 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:50:35.023726 kernel: NET: Registered PF_PACKET protocol family May 17 00:50:35.023732 kernel: Key type dns_resolver registered May 17 00:50:35.023739 kernel: registered taskstats version 1 May 17 00:50:35.023747 kernel: Loading compiled-in X.509 certificates May 17 00:50:35.023754 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 2fa973ae674d09a62938b8c6a2b9446b5340adb7' May 17 00:50:35.023760 kernel: Key type .fscrypt registered May 17 00:50:35.023767 kernel: Key type fscrypt-provisioning registered May 17 00:50:35.023773 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:50:35.023780 kernel: ima: Allocated hash algorithm: sha1 May 17 00:50:35.023786 kernel: ima: No architecture policies found May 17 00:50:35.023793 kernel: clk: Disabling unused clocks May 17 00:50:35.023801 kernel: Freeing unused kernel memory: 36416K May 17 00:50:35.023807 kernel: Run /init as init process May 17 00:50:35.023814 kernel: with arguments: May 17 00:50:35.023820 kernel: /init May 17 00:50:35.023827 kernel: with environment: May 17 00:50:35.023833 kernel: HOME=/ May 17 00:50:35.023840 kernel: TERM=linux May 17 00:50:35.023846 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:50:35.023855 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:50:35.023866 systemd[1]: Detected virtualization microsoft. May 17 00:50:35.023873 systemd[1]: Detected architecture arm64. May 17 00:50:35.023880 systemd[1]: Running in initrd. May 17 00:50:35.023887 systemd[1]: No hostname configured, using default hostname. May 17 00:50:35.023894 systemd[1]: Hostname set to . May 17 00:50:35.023901 systemd[1]: Initializing machine ID from random generator. May 17 00:50:35.023908 systemd[1]: Queued start job for default target initrd.target. May 17 00:50:35.023917 systemd[1]: Started systemd-ask-password-console.path. May 17 00:50:35.023924 systemd[1]: Reached target cryptsetup.target. May 17 00:50:35.023931 systemd[1]: Reached target paths.target. May 17 00:50:35.023937 systemd[1]: Reached target slices.target. May 17 00:50:35.023944 systemd[1]: Reached target swap.target. May 17 00:50:35.023951 systemd[1]: Reached target timers.target. May 17 00:50:35.023958 systemd[1]: Listening on iscsid.socket. May 17 00:50:35.023965 systemd[1]: Listening on iscsiuio.socket. May 17 00:50:35.023974 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:50:35.023981 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:50:35.023988 systemd[1]: Listening on systemd-journald.socket. May 17 00:50:35.023995 systemd[1]: Listening on systemd-networkd.socket. May 17 00:50:35.024002 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:50:35.024009 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:50:35.024016 systemd[1]: Reached target sockets.target. May 17 00:50:35.024023 systemd[1]: Starting kmod-static-nodes.service... May 17 00:50:35.024030 systemd[1]: Finished network-cleanup.service. 
May 17 00:50:35.024038 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:50:35.024046 systemd[1]: Starting systemd-journald.service... May 17 00:50:35.024052 systemd[1]: Starting systemd-modules-load.service... May 17 00:50:35.024059 systemd[1]: Starting systemd-resolved.service... May 17 00:50:35.024066 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:50:35.024077 systemd-journald[276]: Journal started May 17 00:50:35.024116 systemd-journald[276]: Runtime Journal (/run/log/journal/441bef90bffe49d18c3bb25d88d8d361) is 8.0M, max 78.5M, 70.5M free. May 17 00:50:35.006257 systemd-modules-load[277]: Inserted module 'overlay' May 17 00:50:35.050484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:50:35.051423 systemd[1]: Started systemd-journald.service. May 17 00:50:35.065124 kernel: Bridge firewalling registered May 17 00:50:35.065016 systemd-resolved[278]: Positive Trust Anchors: May 17 00:50:35.099689 kernel: audit: type=1130 audit(1747443035.066:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.099710 kernel: SCSI subsystem initialized May 17 00:50:35.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.065025 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:50:35.065052 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:50:35.176908 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:50:35.176929 kernel: device-mapper: uevent: version 1.0.3 May 17 00:50:35.176939 kernel: audit: type=1130 audit(1747443035.131:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.065382 systemd-modules-load[277]: Inserted module 'br_netfilter' May 17 00:50:35.087579 systemd[1]: Finished kmod-static-nodes.service. May 17 00:50:35.219483 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:50:35.219505 kernel: audit: type=1130 audit(1747443035.194:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:50:35.088448 systemd-resolved[278]: Defaulting to hostname 'linux'. May 17 00:50:35.131630 systemd[1]: Started systemd-resolved.service. May 17 00:50:35.263520 kernel: audit: type=1130 audit(1747443035.219:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.263542 kernel: audit: type=1130 audit(1747443035.247:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.194827 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:50:35.219760 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:50:35.248098 systemd-modules-load[277]: Inserted module 'dm_multipath' May 17 00:50:35.248233 systemd[1]: Reached target nss-lookup.target. May 17 00:50:35.326680 kernel: audit: type=1130 audit(1747443035.301:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.272764 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:50:35.283223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:50:35.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.297128 systemd[1]: Finished systemd-modules-load.service. May 17 00:50:35.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.302541 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:50:35.388622 kernel: audit: type=1130 audit(1747443035.331:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.388645 kernel: audit: type=1130 audit(1747443035.356:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.331785 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:50:35.376105 systemd[1]: Starting dracut-cmdline.service... May 17 00:50:35.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:50:35.424447 dracut-cmdline[295]: dracut-dracut-053 May 17 00:50:35.424447 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t May 17 00:50:35.424447 dracut-cmdline[295]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5 May 17 00:50:35.462999 kernel: audit: type=1130 audit(1747443035.406:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.387614 systemd[1]: Starting systemd-sysctl.service... May 17 00:50:35.402141 systemd[1]: Finished systemd-sysctl.service. May 17 00:50:35.476861 kernel: Loading iSCSI transport class v2.0-870. May 17 00:50:35.488428 kernel: iscsi: registered transport (tcp) May 17 00:50:35.508891 kernel: iscsi: registered transport (qla4xxx) May 17 00:50:35.508933 kernel: QLogic iSCSI HBA Driver May 17 00:50:35.543108 systemd[1]: Finished dracut-cmdline.service. May 17 00:50:35.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:35.548703 systemd[1]: Starting dracut-pre-udev.service... May 17 00:50:35.601431 kernel: raid6: neonx8 gen() 13732 MB/s May 17 00:50:35.621419 kernel: raid6: neonx8 xor() 10760 MB/s May 17 00:50:35.643417 kernel: raid6: neonx4 gen() 13542 MB/s May 17 00:50:35.663418 kernel: raid6: neonx4 xor() 11077 MB/s May 17 00:50:35.683437 kernel: raid6: neonx2 gen() 12947 MB/s May 17 00:50:35.704418 kernel: raid6: neonx2 xor() 10258 MB/s May 17 00:50:35.724418 kernel: raid6: neonx1 gen() 10624 MB/s May 17 00:50:35.744417 kernel: raid6: neonx1 xor() 8786 MB/s May 17 00:50:35.765418 kernel: raid6: int64x8 gen() 6273 MB/s May 17 00:50:35.785417 kernel: raid6: int64x8 xor() 3543 MB/s May 17 00:50:35.821415 kernel: raid6: int64x4 gen() 7201 MB/s May 17 00:50:35.832435 kernel: raid6: int64x4 xor() 3853 MB/s May 17 00:50:35.846421 kernel: raid6: int64x2 gen() 6150 MB/s May 17 00:50:35.866417 kernel: raid6: int64x2 xor() 3325 MB/s May 17 00:50:35.887419 kernel: raid6: int64x1 gen() 5043 MB/s May 17 00:50:35.911933 kernel: raid6: int64x1 xor() 2647 MB/s May 17 00:50:35.911942 kernel: raid6: using algorithm neonx8 gen() 13732 MB/s May 17 00:50:35.911951 kernel: raid6: .... xor() 10760 MB/s, rmw enabled May 17 00:50:35.916113 kernel: raid6: using neon recovery algorithm May 17 00:50:35.936723 kernel: xor: measuring software checksum speed May 17 00:50:35.936734 kernel: 8regs : 17213 MB/sec May 17 00:50:35.941645 kernel: 32regs : 20702 MB/sec May 17 00:50:35.945488 kernel: arm64_neon : 27682 MB/sec May 17 00:50:35.945501 kernel: xor: using function: arm64_neon (27682 MB/sec) May 17 00:50:36.005424 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 17 00:50:36.015745 systemd[1]: Finished dracut-pre-udev.service. May 17 00:50:36.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:50:36.024000 audit: BPF prog-id=7 op=LOAD May 17 00:50:36.024000 audit: BPF prog-id=8 op=LOAD May 17 00:50:36.025444 systemd[1]: Starting systemd-udevd.service... May 17 00:50:36.043818 systemd-udevd[476]: Using default interface naming scheme 'v252'. May 17 00:50:36.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:36.050796 systemd[1]: Started systemd-udevd.service. May 17 00:50:36.061716 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:50:36.076314 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation May 17 00:50:36.113066 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:50:36.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:36.118571 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:50:36.156181 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:50:36.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:36.206437 kernel: hv_vmbus: Vmbus version:5.3 May 17 00:50:36.224588 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:50:36.224645 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:50:36.224655 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:50:36.224663 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 May 17 00:50:36.263145 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 May 17 00:50:36.263195 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:50:36.263328 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:50:36.272466 kernel: scsi host0: storvsc_host_t May 17 00:50:36.272636 kernel: scsi host1: storvsc_host_t May 17 00:50:36.282223 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:50:36.290421 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:50:36.306934 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 17 00:50:36.319164 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:50:36.319177 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:50:36.350330 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:50:36.350459 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 17 00:50:36.350547 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:50:36.350627 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:50:36.350701 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 00:50:36.350775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:50:36.350785 kernel: hv_netvsc 002248b7-2ac9-0022-48b7-2ac9002248b7 eth0: VF slot 1 added May 17 00:50:36.350865 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:50:36.366338 kernel: hv_vmbus: registering driver hv_pci May 17 00:50:36.366380 kernel: hv_pci c40211da-ed4b-4803-9f65-b522bfb6f91e: PCI VMBus probing: Using version 0x10004 May 17 00:50:36.477392 
kernel: hv_pci c40211da-ed4b-4803-9f65-b522bfb6f91e: PCI host bridge to bus ed4b:00 May 17 00:50:36.477531 kernel: pci_bus ed4b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 17 00:50:36.477624 kernel: pci_bus ed4b:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:50:36.477694 kernel: pci ed4b:00:02.0: [15b3:1018] type 00 class 0x020000 May 17 00:50:36.477782 kernel: pci ed4b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:50:36.477857 kernel: pci ed4b:00:02.0: enabling Extended Tags May 17 00:50:36.477934 kernel: pci ed4b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ed4b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 17 00:50:36.478010 kernel: pci_bus ed4b:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:50:36.478078 kernel: pci ed4b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:50:36.515436 kernel: mlx5_core ed4b:00:02.0: firmware version: 16.30.1284 May 17 00:50:36.735804 kernel: mlx5_core ed4b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) May 17 00:50:36.735966 kernel: hv_netvsc 002248b7-2ac9-0022-48b7-2ac9002248b7 eth0: VF registering: eth1 May 17 00:50:36.736054 kernel: mlx5_core ed4b:00:02.0 eth1: joined to eth0 May 17 00:50:36.743462 kernel: mlx5_core ed4b:00:02.0 enP60747s1: renamed from eth1 May 17 00:50:36.853080 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:50:36.874430 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (533) May 17 00:50:36.887369 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:50:37.019550 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:50:37.024964 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:50:37.036978 systemd[1]: Starting disk-uuid.service... May 17 00:50:37.058921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:50:38.072795 disk-uuid[599]: The operation has completed successfully. May 17 00:50:38.077532 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:50:38.130697 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:50:38.135551 systemd[1]: Finished disk-uuid.service. May 17 00:50:38.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:38.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:38.144522 systemd[1]: Starting verity-setup.service... May 17 00:50:38.183616 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:50:38.434659 systemd[1]: Found device dev-mapper-usr.device. May 17 00:50:38.440027 systemd[1]: Mounting sysusr-usr.mount... May 17 00:50:38.448853 systemd[1]: Finished verity-setup.service. May 17 00:50:38.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:38.507433 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:50:38.508169 systemd[1]: Mounted sysusr-usr.mount. 
May 17 00:50:38.512072 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:50:38.512894 systemd[1]: Starting ignition-setup.service...
May 17 00:50:38.527852 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:50:38.558308 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:50:38.558346 kernel: BTRFS info (device sda6): using free space tree
May 17 00:50:38.562953 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:50:38.602885 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:50:38.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:38.611000 audit: BPF prog-id=9 op=LOAD
May 17 00:50:38.612023 systemd[1]: Starting systemd-networkd.service...
May 17 00:50:38.637198 systemd-networkd[840]: lo: Link UP
May 17 00:50:38.637210 systemd-networkd[840]: lo: Gained carrier
May 17 00:50:38.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:38.637926 systemd-networkd[840]: Enumeration completed
May 17 00:50:38.640412 systemd[1]: Started systemd-networkd.service.
May 17 00:50:38.644828 systemd[1]: Reached target network.target.
May 17 00:50:38.652312 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:50:38.657768 systemd[1]: Starting iscsiuio.service...
May 17 00:50:38.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:38.670623 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:50:38.670952 systemd[1]: Started iscsiuio.service.
May 17 00:50:38.681084 systemd[1]: Starting iscsid.service...
May 17 00:50:38.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:38.702242 iscsid[852]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:50:38.702242 iscsid[852]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 17 00:50:38.702242 iscsid[852]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 17 00:50:38.702242 iscsid[852]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:50:38.702242 iscsid[852]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:50:38.702242 iscsid[852]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:50:38.702242 iscsid[852]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:50:38.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:38.692076 systemd[1]: Started iscsid.service.
May 17 00:50:38.699693 systemd[1]: Starting dracut-initqueue.service...
May 17 00:50:38.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:38.716983 systemd[1]: Finished dracut-initqueue.service.
May 17 00:50:38.721638 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:50:38.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:38.750245 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:50:38.767988 systemd[1]: Reached target remote-fs.target.
May 17 00:50:38.780325 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:50:38.806272 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:50:38.822870 systemd[1]: Finished ignition-setup.service.
May 17 00:50:38.829114 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:50:38.879423 kernel: mlx5_core ed4b:00:02.0 enP60747s1: Link up
May 17 00:50:38.921706 kernel: hv_netvsc 002248b7-2ac9-0022-48b7-2ac9002248b7 eth0: Data path switched to VF: enP60747s1
May 17 00:50:38.921916 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 17 00:50:38.922358 systemd-networkd[840]: enP60747s1: Link UP
May 17 00:50:38.922492 systemd-networkd[840]: eth0: Link UP
May 17 00:50:38.922613 systemd-networkd[840]: eth0: Gained carrier
May 17 00:50:38.935923 systemd-networkd[840]: enP60747s1: Gained carrier
May 17 00:50:38.953476 systemd-networkd[840]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 17 00:50:40.538593 systemd-networkd[840]: eth0: Gained IPv6LL
May 17 00:50:41.576355 ignition[867]: Ignition 2.14.0
May 17 00:50:41.576368 ignition[867]: Stage: fetch-offline
May 17 00:50:41.576447 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:50:41.576473 ignition[867]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:50:41.642658 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:50:41.642806 ignition[867]: parsed url from cmdline: ""
May 17 00:50:41.642810 ignition[867]: no config URL provided
May 17 00:50:41.642815 ignition[867]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:50:41.658848 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:50:41.697522 kernel: kauditd_printk_skb: 18 callbacks suppressed
May 17 00:50:41.697547 kernel: audit: type=1130 audit(1747443041.668:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:41.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:50:41.642823 ignition[867]: no config at "/usr/lib/ignition/user.ign"
May 17 00:50:41.670439 systemd[1]: Starting ignition-fetch.service...
May 17 00:50:41.642828 ignition[867]: failed to fetch config: resource requires networking May 17 00:50:41.643149 ignition[867]: Ignition finished successfully May 17 00:50:41.700960 ignition[874]: Ignition 2.14.0 May 17 00:50:41.700967 ignition[874]: Stage: fetch May 17 00:50:41.701075 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:50:41.701094 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:50:41.707161 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:50:41.707308 ignition[874]: parsed url from cmdline: "" May 17 00:50:41.707312 ignition[874]: no config URL provided May 17 00:50:41.707324 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:50:41.707332 ignition[874]: no config at "/usr/lib/ignition/user.ign" May 17 00:50:41.707365 ignition[874]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 17 00:50:41.798576 ignition[874]: GET result: OK May 17 00:50:41.798649 ignition[874]: config has been read from IMDS userdata May 17 00:50:41.801117 unknown[874]: fetched base config from "system" May 17 00:50:41.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:41.798674 ignition[874]: parsing config with SHA512: 5d46090d6f6fb474aee2ebf1c999a0c5b1776b76345f6af0a22bf6e8f655a06f762c408ce63076fd702551d620a1715264da062dbda06036f91b7c0035a96ed9 May 17 00:50:41.801124 unknown[874]: fetched base config from "system" May 17 00:50:41.843083 kernel: audit: type=1130 audit(1747443041.811:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:41.801537 ignition[874]: fetch: fetch complete May 17 00:50:41.801130 unknown[874]: fetched user config from "azure" May 17 00:50:41.801542 ignition[874]: fetch: fetch passed May 17 00:50:41.802712 systemd[1]: Finished ignition-fetch.service. May 17 00:50:41.801592 ignition[874]: Ignition finished successfully May 17 00:50:41.884751 kernel: audit: type=1130 audit(1747443041.862:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:41.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:41.831997 systemd[1]: Starting ignition-kargs.service... May 17 00:50:41.845614 ignition[880]: Ignition 2.14.0 May 17 00:50:41.854430 systemd[1]: Finished ignition-kargs.service. May 17 00:50:41.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:41.845621 ignition[880]: Stage: kargs May 17 00:50:41.926579 kernel: audit: type=1130 audit(1747443041.896:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:50:41.863643 systemd[1]: Starting ignition-disks.service... May 17 00:50:41.845747 ignition[880]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:50:41.889719 systemd[1]: Finished ignition-disks.service. May 17 00:50:41.845766 ignition[880]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:50:41.896916 systemd[1]: Reached target initrd-root-device.target. May 17 00:50:41.848482 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:50:41.919422 systemd[1]: Reached target local-fs-pre.target. May 17 00:50:41.850546 ignition[880]: kargs: kargs passed May 17 00:50:41.924481 systemd[1]: Reached target local-fs.target. May 17 00:50:41.850611 ignition[880]: Ignition finished successfully May 17 00:50:41.930738 systemd[1]: Reached target sysinit.target. May 17 00:50:41.873645 ignition[886]: Ignition 2.14.0 May 17 00:50:41.938421 systemd[1]: Reached target basic.target. May 17 00:50:41.873652 ignition[886]: Stage: disks May 17 00:50:41.951799 systemd[1]: Starting systemd-fsck-root.service... May 17 00:50:41.873768 ignition[886]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:50:41.873789 ignition[886]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:50:41.876487 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:50:41.885958 ignition[886]: disks: disks passed May 17 00:50:41.886016 ignition[886]: Ignition finished successfully May 17 00:50:42.151919 systemd-fsck[894]: ROOT: clean, 619/7326000 files, 481078/7359488 blocks May 17 00:50:42.166361 systemd[1]: Finished systemd-fsck-root.service. May 17 00:50:42.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:42.193059 systemd[1]: Mounting sysroot.mount... May 17 00:50:42.201437 kernel: audit: type=1130 audit(1747443042.170:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:42.214433 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:50:42.214980 systemd[1]: Mounted sysroot.mount. May 17 00:50:42.218734 systemd[1]: Reached target initrd-root-fs.target. May 17 00:50:42.258750 systemd[1]: Mounting sysroot-usr.mount... May 17 00:50:42.263132 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 00:50:42.270385 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:50:42.270447 systemd[1]: Reached target ignition-diskful.target. May 17 00:50:42.276400 systemd[1]: Mounted sysroot-usr.mount. May 17 00:50:42.331444 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:50:42.336131 systemd[1]: Starting initrd-setup-root.service... 
May 17 00:50:42.364608 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (905) May 17 00:50:42.364690 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:50:42.364711 kernel: BTRFS info (device sda6): using free space tree May 17 00:50:42.369427 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:50:42.380078 kernel: BTRFS info (device sda6): has skinny extents May 17 00:50:42.383757 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:50:42.395053 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory May 17 00:50:42.419575 initrd-setup-root[944]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:50:42.429598 initrd-setup-root[952]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:50:42.957023 systemd[1]: Finished initrd-setup-root.service. May 17 00:50:42.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:42.979948 systemd[1]: Starting ignition-mount.service... May 17 00:50:42.991878 kernel: audit: type=1130 audit(1747443042.961:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:42.985945 systemd[1]: Starting sysroot-boot.service... May 17 00:50:42.995860 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:50:42.995960 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:50:43.024254 systemd[1]: Finished sysroot-boot.service. May 17 00:50:43.032530 ignition[974]: INFO : Ignition 2.14.0 May 17 00:50:43.032530 ignition[974]: INFO : Stage: mount May 17 00:50:43.032530 ignition[974]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:50:43.032530 ignition[974]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:50:43.032530 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:50:43.101916 kernel: audit: type=1130 audit(1747443043.032:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:43.101940 kernel: audit: type=1130 audit(1747443043.057:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:43.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:43.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:43.050935 systemd[1]: Finished ignition-mount.service. 
May 17 00:50:43.105675 ignition[974]: INFO : mount: mount passed May 17 00:50:43.105675 ignition[974]: INFO : Ignition finished successfully May 17 00:50:43.757518 coreos-metadata[904]: May 17 00:50:43.757 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:50:43.766986 coreos-metadata[904]: May 17 00:50:43.766 INFO Fetch successful May 17 00:50:43.799741 coreos-metadata[904]: May 17 00:50:43.799 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 17 00:50:43.812304 coreos-metadata[904]: May 17 00:50:43.811 INFO Fetch successful May 17 00:50:43.834063 coreos-metadata[904]: May 17 00:50:43.834 INFO wrote hostname ci-3510.3.7-n-44db7a48ea to /sysroot/etc/hostname May 17 00:50:43.843171 systemd[1]: Finished flatcar-metadata-hostname.service. May 17 00:50:43.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:43.870518 systemd[1]: Starting ignition-files.service... May 17 00:50:43.879808 kernel: audit: type=1130 audit(1747443043.847:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:43.880552 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:50:43.897545 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (983) May 17 00:50:43.908342 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:50:43.908366 kernel: BTRFS info (device sda6): using free space tree May 17 00:50:43.908375 kernel: BTRFS info (device sda6): has skinny extents May 17 00:50:43.917220 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:50:43.930269 ignition[1002]: INFO : Ignition 2.14.0 May 17 00:50:43.930269 ignition[1002]: INFO : Stage: files May 17 00:50:43.941035 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:50:43.941035 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:50:43.941035 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:50:43.941035 ignition[1002]: DEBUG : files: compiled without relabeling support, skipping May 17 00:50:43.941035 ignition[1002]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:50:43.941035 ignition[1002]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:50:44.006025 ignition[1002]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:50:44.014202 ignition[1002]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:50:44.014202 ignition[1002]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:50:44.013689 unknown[1002]: wrote ssh authorized keys file for user: core May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3022980336" May 17 00:50:44.035838 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3022980336": device or resource busy May 17 00:50:44.035838 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3022980336", trying btrfs: device or resource busy May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3022980336" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): 
op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3022980336" May 17 00:50:44.035838 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem3022980336" May 17 00:50:44.034547 systemd[1]: mnt-oem3022980336.mount: Deactivated successfully. May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem3022980336" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4043260170" May 17 00:50:44.195116 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4043260170": device or resource busy May 17 00:50:44.195116 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4043260170", trying btrfs: device or resource busy May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4043260170" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4043260170" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem4043260170" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem4043260170" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:50:44.195116 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 17 00:50:44.775138 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK May 17 00:50:45.068484 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:50:45.081749 ignition[1002]: INFO : files: op(f): [started] processing unit "waagent.service" May 17 00:50:45.081749 ignition[1002]: INFO : files: op(f): [finished] processing unit "waagent.service" May 17 00:50:45.081749 ignition[1002]: INFO : files: op(10): [started] processing unit "nvidia.service" May 17 00:50:45.081749 ignition[1002]: INFO : files: op(10): [finished] processing unit "nvidia.service" May 17 00:50:45.081749 ignition[1002]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service" May 17 
00:50:45.081749 ignition[1002]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service" May 17 00:50:45.081749 ignition[1002]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service" May 17 00:50:45.081749 ignition[1002]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service" May 17 00:50:45.081749 ignition[1002]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:50:45.081749 ignition[1002]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:50:45.081749 ignition[1002]: INFO : files: files passed May 17 00:50:45.081749 ignition[1002]: INFO : Ignition finished successfully May 17 00:50:45.219094 kernel: audit: type=1130 audit(1747443045.086:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.081712 systemd[1]: Finished ignition-files.service. May 17 00:50:45.089762 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:50:45.111870 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:50:45.247194 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:50:45.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.118925 systemd[1]: Starting ignition-quench.service... May 17 00:50:45.136731 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:50:45.149034 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:50:45.149120 systemd[1]: Finished ignition-quench.service. May 17 00:50:45.160056 systemd[1]: Reached target ignition-complete.target. May 17 00:50:45.172563 systemd[1]: Starting initrd-parse-etc.service... 
May 17 00:50:45.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.193964 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:50:45.194069 systemd[1]: Finished initrd-parse-etc.service. May 17 00:50:45.206546 systemd[1]: Reached target initrd-fs.target. May 17 00:50:45.214614 systemd[1]: Reached target initrd.target. May 17 00:50:45.222963 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:50:45.223786 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:50:45.246690 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:50:45.253181 systemd[1]: Starting initrd-cleanup.service... May 17 00:50:45.271308 systemd[1]: Stopped target nss-lookup.target. May 17 00:50:45.278539 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:50:45.288128 systemd[1]: Stopped target timers.target. May 17 00:50:45.295985 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:50:45.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.296101 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:50:45.304354 systemd[1]: Stopped target initrd.target. May 17 00:50:45.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.312237 systemd[1]: Stopped target basic.target. May 17 00:50:45.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.320700 systemd[1]: Stopped target ignition-complete.target. May 17 00:50:45.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.329272 systemd[1]: Stopped target ignition-diskful.target. May 17 00:50:45.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.337257 systemd[1]: Stopped target initrd-root-device.target. May 17 00:50:45.345453 systemd[1]: Stopped target remote-fs.target. May 17 00:50:45.462456 ignition[1041]: INFO : Ignition 2.14.0 May 17 00:50:45.462456 ignition[1041]: INFO : Stage: umount May 17 00:50:45.462456 ignition[1041]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:50:45.462456 ignition[1041]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:50:45.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.355938 systemd[1]: Stopped target remote-fs-pre.target. 
May 17 00:50:45.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.515826 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:50:45.515826 ignition[1041]: INFO : umount: umount passed May 17 00:50:45.515826 ignition[1041]: INFO : Ignition finished successfully May 17 00:50:45.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.364008 systemd[1]: Stopped target sysinit.target. May 17 00:50:45.371795 systemd[1]: Stopped target local-fs.target. May 17 00:50:45.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.379473 systemd[1]: Stopped target local-fs-pre.target. May 17 00:50:45.387681 systemd[1]: Stopped target swap.target. May 17 00:50:45.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.395556 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:50:45.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.395683 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:50:45.403634 systemd[1]: Stopped target cryptsetup.target. May 17 00:50:45.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.412518 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:50:45.412622 systemd[1]: Stopped dracut-initqueue.service. May 17 00:50:45.420472 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:50:45.420567 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:50:45.428984 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:50:45.429067 systemd[1]: Stopped ignition-files.service. May 17 00:50:45.436647 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:50:45.436735 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:50:45.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.446199 systemd[1]: Stopping ignition-mount.service... May 17 00:50:45.466774 systemd[1]: Stopping iscsiuio.service... 
May 17 00:50:45.474114 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:50:45.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.474319 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:50:45.698000 audit: BPF prog-id=6 op=UNLOAD May 17 00:50:45.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.485800 systemd[1]: Stopping sysroot-boot.service... May 17 00:50:45.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.500525 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:50:45.500741 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:50:45.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.507670 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:50:45.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.507820 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:50:45.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.522136 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 00:50:45.522247 systemd[1]: Stopped iscsiuio.service. May 17 00:50:45.530376 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:50:45.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.530478 systemd[1]: Stopped ignition-mount.service. May 17 00:50:45.551163 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:50:45.552472 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:50:45.552603 systemd[1]: Stopped ignition-disks.service. May 17 00:50:45.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.563804 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:50:45.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.563917 systemd[1]: Stopped ignition-kargs.service. May 17 00:50:45.568025 systemd[1]: ignition-fetch.service: Deactivated successfully. 
May 17 00:50:45.842877 kernel: hv_netvsc 002248b7-2ac9-0022-48b7-2ac9002248b7 eth0: Data path switched from VF: enP60747s1 May 17 00:50:45.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.568112 systemd[1]: Stopped ignition-fetch.service. May 17 00:50:45.581627 systemd[1]: Stopped target network.target. May 17 00:50:45.590971 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:50:45.591087 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:50:45.599506 systemd[1]: Stopped target paths.target. May 17 00:50:45.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.607372 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:50:45.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.611430 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:50:45.617760 systemd[1]: Stopped target slices.target. May 17 00:50:45.626628 systemd[1]: Stopped target sockets.target. May 17 00:50:45.642668 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:50:45.642740 systemd[1]: Closed iscsid.socket. May 17 00:50:45.650639 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:50:45.650707 systemd[1]: Closed iscsiuio.socket. May 17 00:50:45.657742 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:50:45.657830 systemd[1]: Stopped ignition-setup.service. May 17 00:50:45.667441 systemd[1]: Stopping systemd-networkd.service... May 17 00:50:45.676356 systemd[1]: Stopping systemd-resolved.service... May 17 00:50:45.684458 systemd-networkd[840]: eth0: DHCPv6 lease lost May 17 00:50:45.932000 audit: BPF prog-id=9 op=UNLOAD May 17 00:50:45.685917 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:50:45.686073 systemd[1]: Stopped systemd-resolved.service. May 17 00:50:45.694861 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:50:45.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.694963 systemd[1]: Stopped systemd-networkd.service. May 17 00:50:45.703897 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:50:45.703979 systemd[1]: Finished initrd-cleanup.service. May 17 00:50:45.712180 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:50:45.712226 systemd[1]: Closed systemd-networkd.socket. May 17 00:50:45.720350 systemd[1]: Stopping network-cleanup.service... May 17 00:50:45.729559 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:50:45.729631 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:50:45.734549 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 17 00:50:45.734594 systemd[1]: Stopped systemd-sysctl.service. May 17 00:50:45.748022 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:50:45.748067 systemd[1]: Stopped systemd-modules-load.service. May 17 00:50:45.752664 systemd[1]: Stopping systemd-udevd.service... May 17 00:50:45.761237 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:50:45.769937 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:50:45.770073 systemd[1]: Stopped systemd-udevd.service. May 17 00:50:45.774525 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:50:45.774567 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:50:45.783905 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:50:45.783945 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:50:45.791809 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:50:45.791859 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:50:45.805288 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:50:45.805335 systemd[1]: Stopped dracut-cmdline.service. May 17 00:50:45.815820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:50:45.815862 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:50:45.842307 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:50:45.855567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:50:45.855639 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:50:45.871096 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:50:45.871196 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:50:46.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:45.941988 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:50:45.942105 systemd[1]: Stopped network-cleanup.service. May 17 00:50:46.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:46.090168 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:50:46.090277 systemd[1]: Stopped sysroot-boot.service. May 17 00:50:46.101900 systemd[1]: Reached target initrd-switch-root.target. May 17 00:50:46.110987 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:50:46.111057 systemd[1]: Stopped initrd-setup-root.service. May 17 00:50:46.120572 systemd[1]: Starting initrd-switch-root.service... May 17 00:50:46.137089 systemd[1]: Switching root. May 17 00:50:46.171429 iscsid[852]: iscsid shutting down. May 17 00:50:46.174951 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). May 17 00:50:46.174996 systemd-journald[276]: Journal stopped May 17 00:50:58.685703 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:50:58.685723 kernel: SELinux: Class anon_inode not defined in policy. 
May 17 00:50:58.685733 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:50:58.685744 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:50:58.685753 kernel: SELinux: policy capability open_perms=1 May 17 00:50:58.685761 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:50:58.685770 kernel: SELinux: policy capability always_check_network=0 May 17 00:50:58.685778 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:50:58.685786 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:50:58.685794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:50:58.685803 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:50:58.685812 kernel: kauditd_printk_skb: 41 callbacks suppressed May 17 00:50:58.685821 kernel: audit: type=1403 audit(1747443048.696:80): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:50:58.685831 systemd[1]: Successfully loaded SELinux policy in 287.102ms. May 17 00:50:58.685842 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 63.886ms. May 17 00:50:58.685854 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:50:58.685863 systemd[1]: Detected virtualization microsoft. May 17 00:50:58.685873 systemd[1]: Detected architecture arm64. May 17 00:50:58.685884 systemd[1]: Detected first boot. May 17 00:50:58.685894 systemd[1]: Hostname set to . May 17 00:50:58.685903 systemd[1]: Initializing machine ID from random generator. May 17 00:50:58.685912 kernel: audit: type=1400 audit(1747443050.134:81): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:50:58.685923 kernel: audit: type=1400 audit(1747443050.138:82): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:50:58.685932 kernel: audit: type=1334 audit(1747443050.152:83): prog-id=10 op=LOAD May 17 00:50:58.685941 kernel: audit: type=1334 audit(1747443050.152:84): prog-id=10 op=UNLOAD May 17 00:50:58.685949 kernel: audit: type=1334 audit(1747443050.169:85): prog-id=11 op=LOAD May 17 00:50:58.685958 kernel: audit: type=1334 audit(1747443050.169:86): prog-id=11 op=UNLOAD May 17 00:50:58.685966 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
May 17 00:50:58.685976 kernel: audit: type=1400 audit(1747443051.326:87): avc: denied { associate } for pid=1074 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:50:58.685987 kernel: audit: type=1300 audit(1747443051.326:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000222fc a1=40000283d8 a2=4000026840 a3=32 items=0 ppid=1057 pid=1074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:58.685997 kernel: audit: type=1327 audit(1747443051.326:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:50:58.686006 systemd[1]: Populated /etc with preset unit settings. May 17 00:50:58.686015 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:50:58.686025 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:50:58.686035 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:50:58.686045 kernel: kauditd_printk_skb: 6 callbacks suppressed May 17 00:50:58.686054 kernel: audit: type=1334 audit(1747443058.030:89): prog-id=12 op=LOAD May 17 00:50:58.686062 kernel: audit: type=1334 audit(1747443058.030:90): prog-id=3 op=UNLOAD May 17 00:50:58.686071 kernel: audit: type=1334 audit(1747443058.030:91): prog-id=13 op=LOAD May 17 00:50:58.686080 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:50:58.686091 kernel: audit: type=1334 audit(1747443058.030:92): prog-id=14 op=LOAD May 17 00:50:58.686100 systemd[1]: Stopped iscsid.service. May 17 00:50:58.686110 kernel: audit: type=1334 audit(1747443058.030:93): prog-id=4 op=UNLOAD May 17 00:50:58.686120 kernel: audit: type=1334 audit(1747443058.030:94): prog-id=5 op=UNLOAD May 17 00:50:58.686129 kernel: audit: type=1334 audit(1747443058.036:95): prog-id=15 op=LOAD May 17 00:50:58.686138 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:50:58.686147 kernel: audit: type=1334 audit(1747443058.036:96): prog-id=12 op=UNLOAD May 17 00:50:58.686156 systemd[1]: Stopped initrd-switch-root.service. May 17 00:50:58.686165 kernel: audit: type=1334 audit(1747443058.041:97): prog-id=16 op=LOAD May 17 00:50:58.686174 kernel: audit: type=1334 audit(1747443058.047:98): prog-id=17 op=LOAD May 17 00:50:58.686183 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:50:58.686193 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:50:58.686203 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:50:58.686212 systemd[1]: Created slice system-getty.slice. May 17 00:50:58.686221 systemd[1]: Created slice system-modprobe.slice. May 17 00:50:58.686231 systemd[1]: Created slice system-serial\x2dgetty.slice. 
May 17 00:50:58.686240 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:50:58.686250 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:50:58.686259 systemd[1]: Created slice user.slice. May 17 00:50:58.686268 systemd[1]: Started systemd-ask-password-console.path. May 17 00:50:58.686279 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:50:58.686289 systemd[1]: Set up automount boot.automount. May 17 00:50:58.686298 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:50:58.686307 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:50:58.686317 systemd[1]: Stopped target initrd-fs.target. May 17 00:50:58.686326 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:50:58.686336 systemd[1]: Reached target integritysetup.target. May 17 00:50:58.686345 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:50:58.686356 systemd[1]: Reached target remote-fs.target. May 17 00:50:58.686365 systemd[1]: Reached target slices.target. May 17 00:50:58.686374 systemd[1]: Reached target swap.target. May 17 00:50:58.686383 systemd[1]: Reached target torcx.target. May 17 00:50:58.686393 systemd[1]: Reached target veritysetup.target. May 17 00:50:58.686402 systemd[1]: Listening on systemd-coredump.socket. May 17 00:50:58.686425 systemd[1]: Listening on systemd-initctl.socket. May 17 00:50:58.686435 systemd[1]: Listening on systemd-networkd.socket. May 17 00:50:58.686444 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:50:58.686454 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:50:58.686463 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:50:58.686473 systemd[1]: Mounting dev-hugepages.mount... May 17 00:50:58.686482 systemd[1]: Mounting dev-mqueue.mount... May 17 00:50:58.686493 systemd[1]: Mounting media.mount... May 17 00:50:58.686504 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:50:58.686514 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:50:58.686524 systemd[1]: Mounting tmp.mount... May 17 00:50:58.686533 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:50:58.686543 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:50:58.686553 systemd[1]: Starting kmod-static-nodes.service... May 17 00:50:58.686562 systemd[1]: Starting modprobe@configfs.service... May 17 00:50:58.686572 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:50:58.686581 systemd[1]: Starting modprobe@drm.service... May 17 00:50:58.686592 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:50:58.686602 systemd[1]: Starting modprobe@fuse.service... May 17 00:50:58.686612 systemd[1]: Starting modprobe@loop.service... May 17 00:50:58.686621 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:50:58.686631 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:50:58.686640 systemd[1]: Stopped systemd-fsck-root.service. May 17 00:50:58.686650 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:50:58.686659 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:50:58.686670 systemd[1]: Stopped systemd-journald.service. May 17 00:50:58.686680 systemd[1]: systemd-journald.service: Consumed 2.811s CPU time. May 17 00:50:58.686690 systemd[1]: Starting systemd-journald.service... 
May 17 00:50:58.686701 kernel: loop: module loaded May 17 00:50:58.686710 systemd[1]: Starting systemd-modules-load.service... May 17 00:50:58.686720 systemd[1]: Starting systemd-network-generator.service... May 17 00:50:58.686729 kernel: fuse: init (API version 7.34) May 17 00:50:58.686738 systemd[1]: Starting systemd-remount-fs.service... May 17 00:50:58.686747 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:50:58.686758 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:50:58.686768 systemd[1]: Stopped verity-setup.service. May 17 00:50:58.686778 systemd[1]: Mounted dev-hugepages.mount. May 17 00:50:58.686787 systemd[1]: Mounted dev-mqueue.mount. May 17 00:50:58.686797 systemd[1]: Mounted media.mount. May 17 00:50:58.686810 systemd-journald[1180]: Journal started May 17 00:50:58.686847 systemd-journald[1180]: Runtime Journal (/run/log/journal/6e1ba985f9324d16a93a9bcb24dc8bea) is 8.0M, max 78.5M, 70.5M free. May 17 00:50:48.696000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:50:50.134000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:50:50.138000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:50:50.152000 audit: BPF prog-id=10 op=LOAD May 17 00:50:50.152000 audit: BPF prog-id=10 op=UNLOAD May 17 00:50:50.169000 audit: BPF prog-id=11 op=LOAD May 17 00:50:50.169000 audit: BPF prog-id=11 op=UNLOAD May 17 00:50:51.326000 audit[1074]: AVC avc: denied { associate } for pid=1074 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:50:51.326000 audit[1074]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000222fc a1=40000283d8 a2=4000026840 a3=32 items=0 ppid=1057 pid=1074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:51.326000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:50:51.336000 audit[1074]: AVC avc: denied { associate } for pid=1074 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:50:51.336000 audit[1074]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000223d5 a2=1ed a3=0 items=2 ppid=1057 pid=1074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:51.336000 audit: CWD cwd="/" May 17 00:50:51.336000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:50:51.336000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:50:51.336000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:50:58.030000 audit: BPF prog-id=12 op=LOAD May 17 00:50:58.030000 audit: BPF prog-id=3 op=UNLOAD May 17 00:50:58.030000 audit: BPF prog-id=13 op=LOAD May 17 00:50:58.030000 audit: BPF prog-id=14 op=LOAD May 17 00:50:58.030000 audit: BPF prog-id=4 op=UNLOAD May 17 00:50:58.030000 audit: BPF prog-id=5 op=UNLOAD May 17 00:50:58.036000 audit: BPF prog-id=15 op=LOAD May 17 00:50:58.036000 audit: BPF prog-id=12 op=UNLOAD May 17 00:50:58.041000 audit: BPF prog-id=16 op=LOAD May 17 00:50:58.047000 audit: BPF prog-id=17 op=LOAD May 17 00:50:58.047000 audit: BPF prog-id=13 op=UNLOAD May 17 00:50:58.047000 audit: BPF prog-id=14 op=UNLOAD May 17 00:50:58.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.067000 audit: BPF prog-id=15 op=UNLOAD May 17 00:50:58.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:50:58.583000 audit: BPF prog-id=18 op=LOAD May 17 00:50:58.583000 audit: BPF prog-id=19 op=LOAD May 17 00:50:58.583000 audit: BPF prog-id=20 op=LOAD May 17 00:50:58.583000 audit: BPF prog-id=16 op=UNLOAD May 17 00:50:58.583000 audit: BPF prog-id=17 op=UNLOAD May 17 00:50:58.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.683000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:50:58.683000 audit[1180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffff6192c0 a2=4000 a3=1 items=0 ppid=1 pid=1180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:50:58.683000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:50:51.259136 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:50:58.029626 systemd[1]: Queued start job for default target multi-user.target. May 17 00:50:51.259426 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:50:58.029637 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:50:51.259445 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:50:58.048310 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:50:51.259483 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 00:50:58.048755 systemd[1]: systemd-journald.service: Consumed 2.811s CPU time. 
May 17 00:50:51.259492 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 00:50:51.259522 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 00:50:51.259533 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 00:50:51.259722 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 00:50:51.259752 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:50:51.259764 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:50:51.311141 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 00:50:51.311184 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:50:51.311206 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:50:51.311221 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:50:51.311242 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:50:51.311255 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:50:57.106730 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:50:57.106982 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:50:57.107077 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:50:57.107232 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:50:57.107281 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:50:57.107332 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2025-05-17T00:50:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:50:58.698541 systemd[1]: Started systemd-journald.service. May 17 00:50:58.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.699257 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:50:58.703591 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:50:58.707924 systemd[1]: Mounted tmp.mount. May 17 00:50:58.711477 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:50:58.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.716116 systemd[1]: Finished kmod-static-nodes.service. May 17 00:50:58.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.720804 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:50:58.720922 systemd[1]: Finished modprobe@configfs.service. May 17 00:50:58.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.725385 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:50:58.725598 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:50:58.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:50:58.730124 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:50:58.730237 systemd[1]: Finished modprobe@drm.service. May 17 00:50:58.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.734526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:50:58.734632 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:50:58.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.739513 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:50:58.739625 systemd[1]: Finished modprobe@fuse.service. May 17 00:50:58.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.743986 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:50:58.744097 systemd[1]: Finished modprobe@loop.service. May 17 00:50:58.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.748505 systemd[1]: Finished systemd-modules-load.service. May 17 00:50:58.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.753349 systemd[1]: Finished systemd-network-generator.service. May 17 00:50:58.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.758579 systemd[1]: Finished systemd-remount-fs.service. May 17 00:50:58.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:50:58.763478 systemd[1]: Reached target network-pre.target. May 17 00:50:58.768723 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:50:58.774370 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:50:58.778165 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:50:58.808972 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:50:58.813953 systemd[1]: Starting systemd-journal-flush.service... May 17 00:50:58.818082 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:50:58.819011 systemd[1]: Starting systemd-random-seed.service... May 17 00:50:58.823112 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:50:58.824084 systemd[1]: Starting systemd-sysctl.service... May 17 00:50:58.828847 systemd[1]: Starting systemd-sysusers.service... May 17 00:50:58.834481 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:50:58.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.839645 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:50:58.844304 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:50:58.849938 systemd[1]: Starting systemd-udev-settle.service... May 17 00:50:58.860062 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:50:58.864479 systemd[1]: Finished systemd-random-seed.service. May 17 00:50:58.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.872251 systemd[1]: Reached target first-boot-complete.target. May 17 00:50:58.877656 systemd-journald[1180]: Time spent on flushing to /var/log/journal/6e1ba985f9324d16a93a9bcb24dc8bea is 13.168ms for 1074 entries. May 17 00:50:58.877656 systemd-journald[1180]: System Journal (/var/log/journal/6e1ba985f9324d16a93a9bcb24dc8bea) is 8.0M, max 2.6G, 2.6G free. May 17 00:50:58.948300 systemd-journald[1180]: Received client request to flush runtime journal. May 17 00:50:58.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:58.915272 systemd[1]: Finished systemd-sysctl.service. May 17 00:50:58.949235 systemd[1]: Finished systemd-journal-flush.service. May 17 00:50:58.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:50:59.428807 systemd[1]: Finished systemd-sysusers.service. May 17 00:50:59.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:00.037466 systemd[1]: Finished systemd-hwdb-update.service. 
May 17 00:51:00.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:00.042000 audit: BPF prog-id=21 op=LOAD May 17 00:51:00.042000 audit: BPF prog-id=22 op=LOAD May 17 00:51:00.042000 audit: BPF prog-id=7 op=UNLOAD May 17 00:51:00.042000 audit: BPF prog-id=8 op=UNLOAD May 17 00:51:00.043594 systemd[1]: Starting systemd-udevd.service... May 17 00:51:00.061313 systemd-udevd[1197]: Using default interface naming scheme 'v252'. May 17 00:51:00.245336 systemd[1]: Started systemd-udevd.service. May 17 00:51:00.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:00.259000 audit: BPF prog-id=23 op=LOAD May 17 00:51:00.260786 systemd[1]: Starting systemd-networkd.service... May 17 00:51:00.282617 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 17 00:51:00.337000 audit: BPF prog-id=24 op=LOAD May 17 00:51:00.338000 audit: BPF prog-id=25 op=LOAD May 17 00:51:00.338000 audit: BPF prog-id=26 op=LOAD May 17 00:51:00.337000 audit[1215]: AVC avc: denied { confidentiality } for pid=1215 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:51:00.349432 kernel: hv_vmbus: registering driver hv_balloon May 17 00:51:00.349497 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:51:00.349947 systemd[1]: Starting systemd-userdbd.service... May 17 00:51:00.361433 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 17 00:51:00.361484 kernel: hv_balloon: Memory hot add disabled on ARM64 May 17 00:51:00.337000 audit[1215]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaac8e5f190 a1=aa2c a2=ffffbcf224b0 a3=aaaac8db8010 items=12 ppid=1197 pid=1215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:00.337000 audit: CWD cwd="/" May 17 00:51:00.337000 audit: PATH item=0 name=(null) inode=6646 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=1 name=(null) inode=11404 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=2 name=(null) inode=11404 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=3 name=(null) inode=11405 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=4 name=(null) inode=11404 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=5 name=(null) inode=11406 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=6 name=(null) inode=11404 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=7 name=(null) inode=11407 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=8 name=(null) inode=11404 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=9 name=(null) inode=11408 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=10 name=(null) inode=11404 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PATH item=11 name=(null) inode=11409 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:51:00.337000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:51:00.386447 kernel: hv_vmbus: registering driver hyperv_fb May 17 00:51:00.399289 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 17 00:51:00.399362 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 17 00:51:00.412369 kernel: hv_utils: Registering HyperV Utility Driver May 17 00:51:00.412440 kernel: Console: switching to colour dummy device 80x25 May 17 00:51:00.412458 kernel: hv_vmbus: registering driver hv_utils May 17 00:51:00.420624 kernel: hv_utils: Heartbeat IC version 3.0 May 17 00:51:00.420709 kernel: hv_utils: Shutdown IC version 3.2 May 17 00:51:00.420724 kernel: hv_utils: TimeSync IC version 4.0 May 17 00:51:00.676112 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:51:00.685915 systemd[1]: Started systemd-userdbd.service. May 17 00:51:00.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:00.903115 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:51:00.913444 systemd[1]: Finished systemd-udev-settle.service. May 17 00:51:00.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:00.919588 systemd[1]: Starting lvm2-activation-early.service... May 17 00:51:00.945250 systemd-networkd[1218]: lo: Link UP May 17 00:51:00.945479 systemd-networkd[1218]: lo: Gained carrier May 17 00:51:00.946028 systemd-networkd[1218]: Enumeration completed May 17 00:51:00.946541 systemd[1]: Started systemd-networkd.service. May 17 00:51:00.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:00.953596 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:51:00.976582 systemd-networkd[1218]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:51:01.026110 kernel: mlx5_core ed4b:00:02.0 enP60747s1: Link up May 17 00:51:01.051984 systemd-networkd[1218]: enP60747s1: Link UP May 17 00:51:01.052137 kernel: hv_netvsc 002248b7-2ac9-0022-48b7-2ac9002248b7 eth0: Data path switched to VF: enP60747s1 May 17 00:51:01.052121 systemd-networkd[1218]: eth0: Link UP May 17 00:51:01.052133 systemd-networkd[1218]: eth0: Gained carrier May 17 00:51:01.057341 systemd-networkd[1218]: enP60747s1: Gained carrier May 17 00:51:01.070192 systemd-networkd[1218]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:51:01.217917 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:51:01.241039 systemd[1]: Finished lvm2-activation-early.service. May 17 00:51:01.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.246327 systemd[1]: Reached target cryptsetup.target. May 17 00:51:01.251754 systemd[1]: Starting lvm2-activation.service... May 17 00:51:01.255982 lvm[1277]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:51:01.279027 systemd[1]: Finished lvm2-activation.service. May 17 00:51:01.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.283705 systemd[1]: Reached target local-fs-pre.target. May 17 00:51:01.288318 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:51:01.288350 systemd[1]: Reached target local-fs.target. May 17 00:51:01.292512 systemd[1]: Reached target machines.target. May 17 00:51:01.297929 systemd[1]: Starting ldconfig.service... May 17 00:51:01.301623 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:01.301687 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:01.302720 systemd[1]: Starting systemd-boot-update.service... May 17 00:51:01.307829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:51:01.314237 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:51:01.319714 systemd[1]: Starting systemd-sysext.service... May 17 00:51:01.373840 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1279 (bootctl) May 17 00:51:01.375165 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:51:01.387916 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:51:01.388557 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:51:01.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:01.394780 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:51:01.402124 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:51:01.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.436248 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:51:01.436429 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:51:01.481102 kernel: loop0: detected capacity change from 0 to 211168 May 17 00:51:01.523130 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:51:01.546110 kernel: loop1: detected capacity change from 0 to 211168 May 17 00:51:01.549078 (sd-sysext)[1291]: Using extensions 'kubernetes'. May 17 00:51:01.550118 (sd-sysext)[1291]: Merged extensions into '/usr'. May 17 00:51:01.566067 systemd[1]: Mounting usr-share-oem.mount... May 17 00:51:01.569793 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:01.571059 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:01.576504 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:01.581826 systemd[1]: Starting modprobe@loop.service... May 17 00:51:01.586170 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:01.586303 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:01.588627 systemd[1]: Mounted usr-share-oem.mount. May 17 00:51:01.593228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:01.593384 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:01.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.598159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:01.598287 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:01.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.603211 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:51:01.603321 systemd[1]: Finished modprobe@loop.service. May 17 00:51:01.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:51:01.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.608162 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:51:01.608261 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:51:01.609387 systemd[1]: Finished systemd-sysext.service. May 17 00:51:01.614031 systemd-fsck[1287]: fsck.fat 4.2 (2021-01-31) May 17 00:51:01.614031 systemd-fsck[1287]: /dev/sda1: 236 files, 117182/258078 clusters May 17 00:51:01.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.616753 systemd[1]: Starting ensure-sysext.service... May 17 00:51:01.622927 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:51:01.630068 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:51:01.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.640218 systemd[1]: Mounting boot.mount... May 17 00:51:01.643735 systemd[1]: Reloading. May 17 00:51:01.678173 /usr/lib/systemd/system-generators/torcx-generator[1322]: time="2025-05-17T00:51:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:51:01.682761 /usr/lib/systemd/system-generators/torcx-generator[1322]: time="2025-05-17T00:51:01Z" level=info msg="torcx already run" May 17 00:51:01.769258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:51:01.769276 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:51:01.784990 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:51:01.818323 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
May 17 00:51:01.847000 audit: BPF prog-id=27 op=LOAD May 17 00:51:01.847000 audit: BPF prog-id=28 op=LOAD May 17 00:51:01.847000 audit: BPF prog-id=21 op=UNLOAD May 17 00:51:01.847000 audit: BPF prog-id=22 op=UNLOAD May 17 00:51:01.848000 audit: BPF prog-id=29 op=LOAD May 17 00:51:01.848000 audit: BPF prog-id=24 op=UNLOAD May 17 00:51:01.848000 audit: BPF prog-id=30 op=LOAD May 17 00:51:01.848000 audit: BPF prog-id=31 op=LOAD May 17 00:51:01.848000 audit: BPF prog-id=25 op=UNLOAD May 17 00:51:01.848000 audit: BPF prog-id=26 op=UNLOAD May 17 00:51:01.849000 audit: BPF prog-id=32 op=LOAD May 17 00:51:01.849000 audit: BPF prog-id=18 op=UNLOAD May 17 00:51:01.849000 audit: BPF prog-id=33 op=LOAD May 17 00:51:01.849000 audit: BPF prog-id=34 op=LOAD May 17 00:51:01.849000 audit: BPF prog-id=19 op=UNLOAD May 17 00:51:01.849000 audit: BPF prog-id=20 op=UNLOAD May 17 00:51:01.850000 audit: BPF prog-id=35 op=LOAD May 17 00:51:01.850000 audit: BPF prog-id=23 op=UNLOAD May 17 00:51:01.854757 systemd[1]: Mounted boot.mount. May 17 00:51:01.866048 systemd[1]: Finished systemd-boot-update.service. May 17 00:51:01.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.873644 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:01.874824 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:01.879795 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:01.884962 systemd[1]: Starting modprobe@loop.service... May 17 00:51:01.888706 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:01.888875 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:01.889771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:01.889909 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:01.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.894686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:01.894801 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:01.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.899522 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:51:01.899631 systemd[1]: Finished modprobe@loop.service. 
May 17 00:51:01.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.905343 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:01.906541 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:01.911407 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:01.916310 systemd[1]: Starting modprobe@loop.service... May 17 00:51:01.920216 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:01.920340 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:01.920418 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:51:01.921073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:01.921214 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:01.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.925685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:01.925794 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:01.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.930696 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:51:01.930810 systemd[1]: Finished modprobe@loop.service. May 17 00:51:01.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.935058 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:51:01.935156 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 17 00:51:01.937389 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:51:01.938703 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:51:01.943301 systemd[1]: Starting modprobe@drm.service... May 17 00:51:01.947878 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:51:01.952961 systemd[1]: Starting modprobe@loop.service... May 17 00:51:01.957723 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:51:01.957842 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:01.958762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:51:01.958891 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:51:01.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.963495 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:51:01.963611 systemd[1]: Finished modprobe@drm.service. May 17 00:51:01.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.968360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:51:01.968476 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:51:01.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.973269 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:51:01.973389 systemd[1]: Finished modprobe@loop.service. May 17 00:51:01.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.977852 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:51:01.977920 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 17 00:51:01.978868 systemd[1]: Finished ensure-sysext.service. May 17 00:51:01.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:01.993648 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:51:02.525333 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:51:02.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:02.531740 systemd[1]: Starting audit-rules.service... May 17 00:51:02.536386 systemd[1]: Starting clean-ca-certificates.service... May 17 00:51:02.541722 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:51:02.546000 audit: BPF prog-id=36 op=LOAD May 17 00:51:02.547934 systemd[1]: Starting systemd-resolved.service... May 17 00:51:02.551000 audit: BPF prog-id=37 op=LOAD May 17 00:51:02.553279 systemd[1]: Starting systemd-timesyncd.service... May 17 00:51:02.559261 systemd[1]: Starting systemd-update-utmp.service... May 17 00:51:02.588934 systemd[1]: Finished clean-ca-certificates.service. May 17 00:51:02.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:02.593866 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:51:02.598000 audit[1397]: SYSTEM_BOOT pid=1397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:51:02.602062 systemd[1]: Finished systemd-update-utmp.service. May 17 00:51:02.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:02.649910 systemd[1]: Started systemd-timesyncd.service. May 17 00:51:02.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:02.654894 systemd[1]: Reached target time-set.target. May 17 00:51:02.687786 systemd-resolved[1395]: Positive Trust Anchors: May 17 00:51:02.688070 systemd-resolved[1395]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:51:02.688188 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:51:02.757290 systemd-resolved[1395]: Using system hostname 'ci-3510.3.7-n-44db7a48ea'. May 17 00:51:02.758905 systemd[1]: Started systemd-resolved.service. May 17 00:51:02.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:02.763351 systemd[1]: Reached target network.target. May 17 00:51:02.767524 systemd[1]: Reached target nss-lookup.target. May 17 00:51:02.772368 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:51:02.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:51:02.854000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:51:02.854000 audit[1413]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff460b6f0 a2=420 a3=0 items=0 ppid=1392 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:51:02.854000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:51:02.855888 augenrules[1413]: No rules May 17 00:51:02.856675 systemd[1]: Finished audit-rules.service. May 17 00:51:02.933218 systemd-networkd[1218]: eth0: Gained IPv6LL May 17 00:51:02.935024 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:51:02.940375 systemd[1]: Reached target network-online.target. May 17 00:51:02.991749 systemd-timesyncd[1396]: Contacted time server 104.167.215.195:123 (0.flatcar.pool.ntp.org). May 17 00:51:02.992117 systemd-timesyncd[1396]: Initial clock synchronization to Sat 2025-05-17 00:51:02.993621 UTC. May 17 00:51:08.433841 ldconfig[1278]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:51:08.442568 systemd[1]: Finished ldconfig.service. May 17 00:51:08.448428 systemd[1]: Starting systemd-update-done.service... May 17 00:51:08.471989 systemd[1]: Finished systemd-update-done.service. May 17 00:51:08.476823 systemd[1]: Reached target sysinit.target. May 17 00:51:08.481167 systemd[1]: Started motdgen.path. May 17 00:51:08.485258 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:51:08.491316 systemd[1]: Started logrotate.timer. May 17 00:51:08.495148 systemd[1]: Started mdadm.timer. May 17 00:51:08.498815 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:51:08.503546 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
May 17 00:51:08.503577 systemd[1]: Reached target paths.target. May 17 00:51:08.507519 systemd[1]: Reached target timers.target. May 17 00:51:08.513127 systemd[1]: Listening on dbus.socket. May 17 00:51:08.518780 systemd[1]: Starting docker.socket... May 17 00:51:08.524365 systemd[1]: Listening on sshd.socket. May 17 00:51:08.528320 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:08.528741 systemd[1]: Listening on docker.socket. May 17 00:51:08.532857 systemd[1]: Reached target sockets.target. May 17 00:51:08.537065 systemd[1]: Reached target basic.target. May 17 00:51:08.541182 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:51:08.541211 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:51:08.542187 systemd[1]: Starting containerd.service... May 17 00:51:08.546726 systemd[1]: Starting dbus.service... May 17 00:51:08.550951 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:51:08.556620 systemd[1]: Starting extend-filesystems.service... May 17 00:51:08.563889 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:51:08.565159 systemd[1]: Starting kubelet.service... May 17 00:51:08.569697 systemd[1]: Starting motdgen.service... May 17 00:51:08.574108 systemd[1]: Started nvidia.service. May 17 00:51:08.579424 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:51:08.584712 systemd[1]: Starting sshd-keygen.service... May 17 00:51:08.590635 systemd[1]: Starting systemd-logind.service... May 17 00:51:08.595036 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:51:08.595109 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:51:08.595506 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:51:08.596598 systemd[1]: Starting update-engine.service... May 17 00:51:08.601183 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:51:08.609093 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:51:08.609965 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:51:08.615069 jq[1440]: true May 17 00:51:08.615507 jq[1423]: false May 17 00:51:08.632286 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:51:08.632446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
May 17 00:51:08.643133 extend-filesystems[1424]: Found loop1 May 17 00:51:08.643133 extend-filesystems[1424]: Found sda May 17 00:51:08.654457 extend-filesystems[1424]: Found sda1 May 17 00:51:08.654457 extend-filesystems[1424]: Found sda2 May 17 00:51:08.654457 extend-filesystems[1424]: Found sda3 May 17 00:51:08.654457 extend-filesystems[1424]: Found usr May 17 00:51:08.654457 extend-filesystems[1424]: Found sda4 May 17 00:51:08.654457 extend-filesystems[1424]: Found sda6 May 17 00:51:08.654457 extend-filesystems[1424]: Found sda7 May 17 00:51:08.654457 extend-filesystems[1424]: Found sda9 May 17 00:51:08.654457 extend-filesystems[1424]: Checking size of /dev/sda9 May 17 00:51:08.668320 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:51:08.733493 jq[1443]: true May 17 00:51:08.733612 env[1446]: time="2025-05-17T00:51:08.730114351Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:51:08.668502 systemd[1]: Finished motdgen.service. May 17 00:51:08.688278 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 17 00:51:08.691490 systemd-logind[1437]: New seat seat0. May 17 00:51:08.752643 extend-filesystems[1424]: Old size kept for /dev/sda9 May 17 00:51:08.752643 extend-filesystems[1424]: Found sr0 May 17 00:51:08.752489 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:51:08.752645 systemd[1]: Finished extend-filesystems.service. May 17 00:51:08.817728 env[1446]: time="2025-05-17T00:51:08.817686157Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:51:08.817957 env[1446]: time="2025-05-17T00:51:08.817940420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:51:08.821715 env[1446]: time="2025-05-17T00:51:08.821679557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:51:08.821813 env[1446]: time="2025-05-17T00:51:08.821798888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:51:08.822139 env[1446]: time="2025-05-17T00:51:08.822075113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:51:08.822238 env[1446]: time="2025-05-17T00:51:08.822221926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:51:08.822303 env[1446]: time="2025-05-17T00:51:08.822290012Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:51:08.822355 env[1446]: time="2025-05-17T00:51:08.822342337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:51:08.822493 env[1446]: time="2025-05-17T00:51:08.822477069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:51:08.822772 env[1446]: time="2025-05-17T00:51:08.822752094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:51:08.822971 env[1446]: time="2025-05-17T00:51:08.822950871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:51:08.823033 env[1446]: time="2025-05-17T00:51:08.823021558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:51:08.823171 env[1446]: time="2025-05-17T00:51:08.823151970Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:51:08.823245 env[1446]: time="2025-05-17T00:51:08.823232297Z" level=info msg="metadata content store policy set" policy=shared May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845110667Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845151991Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845168992Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845205076Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845221957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845236118Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845249880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845582070Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845598951Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845612472Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845625914Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845639515Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:51:08.845882 env[1446]: time="2025-05-17T00:51:08.845764046Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.845835652Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846501352Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846531715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846545476Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846589640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846604202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846617843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846628924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846639525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846651406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846663367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846675008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846688129Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846809580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847269 env[1446]: time="2025-05-17T00:51:08.846825342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847623 env[1446]: time="2025-05-17T00:51:08.846837543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:51:08.847623 env[1446]: time="2025-05-17T00:51:08.846848744Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:51:08.847623 env[1446]: time="2025-05-17T00:51:08.846861825Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:51:08.847623 env[1446]: time="2025-05-17T00:51:08.846871666Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:51:08.847623 env[1446]: time="2025-05-17T00:51:08.846887827Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:51:08.847623 env[1446]: time="2025-05-17T00:51:08.846952553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:51:08.848908 systemd[1]: Started containerd.service. May 17 00:51:08.850217 env[1446]: time="2025-05-17T00:51:08.847901959Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:51:08.850217 env[1446]: time="2025-05-17T00:51:08.847966524Z" level=info msg="Connect containerd service" May 17 00:51:08.850217 env[1446]: time="2025-05-17T00:51:08.847996087Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:51:08.850217 env[1446]: time="2025-05-17T00:51:08.848540696Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:51:08.850217 env[1446]: time="2025-05-17T00:51:08.848757276Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:51:08.850217 env[1446]: time="2025-05-17T00:51:08.848791479Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 17 00:51:08.850217 env[1446]: time="2025-05-17T00:51:08.848841523Z" level=info msg="containerd successfully booted in 0.121120s" May 17 00:51:08.866959 env[1446]: time="2025-05-17T00:51:08.858977956Z" level=info msg="Start subscribing containerd event" May 17 00:51:08.866959 env[1446]: time="2025-05-17T00:51:08.859038681Z" level=info msg="Start recovering state" May 17 00:51:08.866959 env[1446]: time="2025-05-17T00:51:08.859367911Z" level=info msg="Start event monitor" May 17 00:51:08.866959 env[1446]: time="2025-05-17T00:51:08.859395114Z" level=info msg="Start snapshots syncer" May 17 00:51:08.866959 env[1446]: time="2025-05-17T00:51:08.859413995Z" level=info msg="Start cni network conf syncer for default" May 17 00:51:08.866959 env[1446]: time="2025-05-17T00:51:08.859439998Z" level=info msg="Start streaming server" May 17 00:51:08.872290 dbus-daemon[1422]: [system] SELinux support is enabled May 17 00:51:08.872468 systemd[1]: Started dbus.service. May 17 00:51:08.879426 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:51:08.879937 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1' May 17 00:51:08.879450 systemd[1]: Reached target system-config.target. May 17 00:51:08.887203 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:51:08.887225 systemd[1]: Reached target user-config.target. May 17 00:51:08.893419 systemd[1]: Started systemd-logind.service. May 17 00:51:08.936941 systemd[1]: nvidia.service: Deactivated successfully. May 17 00:51:08.997175 bash[1470]: Updated "/home/core/.ssh/authorized_keys" May 17 00:51:08.998169 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:51:09.462735 update_engine[1439]: I0517 00:51:09.445987 1439 main.cc:92] Flatcar Update Engine starting May 17 00:51:09.478921 systemd[1]: Started kubelet.service. May 17 00:51:09.512973 systemd[1]: Started update-engine.service. May 17 00:51:09.513415 update_engine[1439]: I0517 00:51:09.513312 1439 update_check_scheduler.cc:74] Next update check in 11m33s May 17 00:51:09.519352 systemd[1]: Started locksmithd.service. May 17 00:51:09.897720 kubelet[1524]: E0517 00:51:09.897625 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:51:09.899498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:51:09.899636 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:51:10.720315 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:51:12.294873 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:51:12.316652 systemd[1]: Finished sshd-keygen.service. May 17 00:51:12.322590 systemd[1]: Starting issuegen.service... May 17 00:51:12.327254 systemd[1]: Started waagent.service. May 17 00:51:12.331661 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:51:12.331817 systemd[1]: Finished issuegen.service. 
May 17 00:51:12.336860 systemd[1]: Starting systemd-user-sessions.service... May 17 00:51:12.374197 systemd[1]: Finished systemd-user-sessions.service. May 17 00:51:12.380401 systemd[1]: Started getty@tty1.service. May 17 00:51:12.386033 systemd[1]: Started serial-getty@ttyAMA0.service. May 17 00:51:12.391060 systemd[1]: Reached target getty.target. May 17 00:51:12.395221 systemd[1]: Reached target multi-user.target. May 17 00:51:12.400823 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:51:12.411535 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:51:12.411689 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:51:12.416940 systemd[1]: Startup finished in 724ms (kernel) + 13.396s (initrd) + 24.129s (userspace) = 38.250s. May 17 00:51:13.014374 login[1548]: pam_lastlog(login:session): file /var/log/lastlog is locked/write May 17 00:51:13.015828 login[1549]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:51:13.055068 systemd[1]: Created slice user-500.slice. May 17 00:51:13.056259 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:51:13.058634 systemd-logind[1437]: New session 2 of user core. May 17 00:51:13.096719 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:51:13.098161 systemd[1]: Starting user@500.service... May 17 00:51:13.127696 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:13.282745 systemd[1552]: Queued start job for default target default.target. May 17 00:51:13.283235 systemd[1552]: Reached target paths.target. May 17 00:51:13.283254 systemd[1552]: Reached target sockets.target. May 17 00:51:13.283265 systemd[1552]: Reached target timers.target. May 17 00:51:13.283275 systemd[1552]: Reached target basic.target. May 17 00:51:13.283316 systemd[1552]: Reached target default.target. May 17 00:51:13.283338 systemd[1552]: Startup finished in 149ms. May 17 00:51:13.283384 systemd[1]: Started user@500.service. May 17 00:51:13.284262 systemd[1]: Started session-2.scope. May 17 00:51:14.014987 login[1548]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 00:51:14.018398 systemd-logind[1437]: New session 1 of user core. May 17 00:51:14.019210 systemd[1]: Started session-1.scope. May 17 00:51:19.195017 waagent[1546]: 2025-05-17T00:51:19.194897Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 May 17 00:51:19.215136 waagent[1546]: 2025-05-17T00:51:19.215044Z INFO Daemon Daemon OS: flatcar 3510.3.7 May 17 00:51:19.219591 waagent[1546]: 2025-05-17T00:51:19.219530Z INFO Daemon Daemon Python: 3.9.16 May 17 00:51:19.224054 waagent[1546]: 2025-05-17T00:51:19.223879Z INFO Daemon Daemon Run daemon May 17 00:51:19.228436 waagent[1546]: 2025-05-17T00:51:19.228377Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7' May 17 00:51:19.244800 waagent[1546]: 2025-05-17T00:51:19.244671Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
May 17 00:51:19.259720 waagent[1546]: 2025-05-17T00:51:19.259594Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:51:19.269334 waagent[1546]: 2025-05-17T00:51:19.269262Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:51:19.274569 waagent[1546]: 2025-05-17T00:51:19.274505Z INFO Daemon Daemon Using waagent for provisioning May 17 00:51:19.280348 waagent[1546]: 2025-05-17T00:51:19.280288Z INFO Daemon Daemon Activate resource disk May 17 00:51:19.285021 waagent[1546]: 2025-05-17T00:51:19.284964Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 17 00:51:19.298868 waagent[1546]: 2025-05-17T00:51:19.298805Z INFO Daemon Daemon Found device: None May 17 00:51:19.303433 waagent[1546]: 2025-05-17T00:51:19.303374Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 17 00:51:19.312067 waagent[1546]: 2025-05-17T00:51:19.312008Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 17 00:51:19.324509 waagent[1546]: 2025-05-17T00:51:19.324445Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:51:19.331304 waagent[1546]: 2025-05-17T00:51:19.331244Z INFO Daemon Daemon Running default provisioning handler May 17 00:51:19.344388 waagent[1546]: 2025-05-17T00:51:19.344266Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. May 17 00:51:19.358737 waagent[1546]: 2025-05-17T00:51:19.358607Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 17 00:51:19.368396 waagent[1546]: 2025-05-17T00:51:19.368327Z INFO Daemon Daemon cloud-init is enabled: False May 17 00:51:19.373240 waagent[1546]: 2025-05-17T00:51:19.373181Z INFO Daemon Daemon Copying ovf-env.xml May 17 00:51:19.478981 waagent[1546]: 2025-05-17T00:51:19.478786Z INFO Daemon Daemon Successfully mounted dvd May 17 00:51:19.555822 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 17 00:51:19.591480 waagent[1546]: 2025-05-17T00:51:19.591348Z INFO Daemon Daemon Detect protocol endpoint May 17 00:51:19.596849 waagent[1546]: 2025-05-17T00:51:19.596750Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 17 00:51:19.602763 waagent[1546]: 2025-05-17T00:51:19.602696Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler May 17 00:51:19.609302 waagent[1546]: 2025-05-17T00:51:19.609240Z INFO Daemon Daemon Test for route to 168.63.129.16 May 17 00:51:19.614897 waagent[1546]: 2025-05-17T00:51:19.614839Z INFO Daemon Daemon Route to 168.63.129.16 exists May 17 00:51:19.619951 waagent[1546]: 2025-05-17T00:51:19.619895Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 17 00:51:19.763988 waagent[1546]: 2025-05-17T00:51:19.763870Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 17 00:51:19.771525 waagent[1546]: 2025-05-17T00:51:19.771478Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 17 00:51:19.777403 waagent[1546]: 2025-05-17T00:51:19.777325Z INFO Daemon Daemon Server preferred version:2015-04-05 May 17 00:51:20.150356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:51:20.150529 systemd[1]: Stopped kubelet.service. 
May 17 00:51:20.151879 systemd[1]: Starting kubelet.service... May 17 00:51:20.254661 systemd[1]: Started kubelet.service. May 17 00:51:20.348815 kubelet[1592]: E0517 00:51:20.348780 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:51:20.351467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:51:20.351597 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:51:20.798808 waagent[1546]: 2025-05-17T00:51:20.798664Z INFO Daemon Daemon Initializing goal state during protocol detection May 17 00:51:20.815152 waagent[1546]: 2025-05-17T00:51:20.815077Z INFO Daemon Daemon Forcing an update of the goal state.. May 17 00:51:20.820647 waagent[1546]: 2025-05-17T00:51:20.820590Z INFO Daemon Daemon Fetching goal state [incarnation 1] May 17 00:51:20.938613 waagent[1546]: 2025-05-17T00:51:20.938495Z INFO Daemon Daemon Found private key matching thumbprint 6C5557994A1352BDCD0C782CD6386FE2FD3EA7BA May 17 00:51:20.946801 waagent[1546]: 2025-05-17T00:51:20.946726Z INFO Daemon Daemon Certificate with thumbprint 8CD7203E2EFC7E7BD17BABD868A63535C1B7F2BC has no matching private key. May 17 00:51:20.956064 waagent[1546]: 2025-05-17T00:51:20.956000Z INFO Daemon Daemon Fetch goal state completed May 17 00:51:20.989266 waagent[1546]: 2025-05-17T00:51:20.989213Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 771c3973-91df-48a0-bb71-75e6be299359 New eTag: 5880490296872575518] May 17 00:51:20.999898 waagent[1546]: 2025-05-17T00:51:20.999833Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:51:21.017801 waagent[1546]: 2025-05-17T00:51:21.017743Z INFO Daemon Daemon Starting provisioning May 17 00:51:21.022891 waagent[1546]: 2025-05-17T00:51:21.022828Z INFO Daemon Daemon Handle ovf-env.xml. May 17 00:51:21.027654 waagent[1546]: 2025-05-17T00:51:21.027598Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-44db7a48ea] May 17 00:51:21.078687 waagent[1546]: 2025-05-17T00:51:21.078561Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-44db7a48ea] May 17 00:51:21.086197 waagent[1546]: 2025-05-17T00:51:21.086105Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 17 00:51:21.092820 waagent[1546]: 2025-05-17T00:51:21.092753Z INFO Daemon Daemon Primary interface is [eth0] May 17 00:51:21.109047 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. May 17 00:51:21.109223 systemd[1]: Stopped systemd-networkd-wait-online.service. May 17 00:51:21.109279 systemd[1]: Stopping systemd-networkd-wait-online.service... May 17 00:51:21.109507 systemd[1]: Stopping systemd-networkd.service... May 17 00:51:21.116130 systemd-networkd[1218]: eth0: DHCPv6 lease lost May 17 00:51:21.117434 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:51:21.117606 systemd[1]: Stopped systemd-networkd.service. May 17 00:51:21.119454 systemd[1]: Starting systemd-networkd.service... 
May 17 00:51:21.146134 systemd-networkd[1608]: enP60747s1: Link UP May 17 00:51:21.146142 systemd-networkd[1608]: enP60747s1: Gained carrier May 17 00:51:21.146982 systemd-networkd[1608]: eth0: Link UP May 17 00:51:21.146992 systemd-networkd[1608]: eth0: Gained carrier May 17 00:51:21.147558 systemd-networkd[1608]: lo: Link UP May 17 00:51:21.147568 systemd-networkd[1608]: lo: Gained carrier May 17 00:51:21.147789 systemd-networkd[1608]: eth0: Gained IPv6LL May 17 00:51:21.148863 systemd-networkd[1608]: Enumeration completed May 17 00:51:21.148973 systemd[1]: Started systemd-networkd.service. May 17 00:51:21.150582 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:51:21.150595 systemd-networkd[1608]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:51:21.152491 waagent[1546]: 2025-05-17T00:51:21.152288Z INFO Daemon Daemon Create user account if not exists May 17 00:51:21.160344 waagent[1546]: 2025-05-17T00:51:21.160260Z INFO Daemon Daemon User core already exists, skip useradd May 17 00:51:21.166070 waagent[1546]: 2025-05-17T00:51:21.165996Z INFO Daemon Daemon Configure sudoer May 17 00:51:21.171026 waagent[1546]: 2025-05-17T00:51:21.170960Z INFO Daemon Daemon Configure sshd May 17 00:51:21.172141 systemd-networkd[1608]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:51:21.176721 waagent[1546]: 2025-05-17T00:51:21.176150Z INFO Daemon Daemon Deploy ssh public key. May 17 00:51:21.176444 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:51:22.346071 waagent[1546]: 2025-05-17T00:51:22.345978Z INFO Daemon Daemon Provisioning complete May 17 00:51:22.364963 waagent[1546]: 2025-05-17T00:51:22.364898Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 17 00:51:22.370911 waagent[1546]: 2025-05-17T00:51:22.370845Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. May 17 00:51:22.381180 waagent[1546]: 2025-05-17T00:51:22.381117Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent May 17 00:51:22.684294 waagent[1617]: 2025-05-17T00:51:22.684145Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent May 17 00:51:22.685379 waagent[1617]: 2025-05-17T00:51:22.685323Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:51:22.685616 waagent[1617]: 2025-05-17T00:51:22.685568Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:51:22.701528 waagent[1617]: 2025-05-17T00:51:22.701445Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. May 17 00:51:22.701839 waagent[1617]: 2025-05-17T00:51:22.701791Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] May 17 00:51:22.773885 waagent[1617]: 2025-05-17T00:51:22.773755Z INFO ExtHandler ExtHandler Found private key matching thumbprint 6C5557994A1352BDCD0C782CD6386FE2FD3EA7BA May 17 00:51:22.774272 waagent[1617]: 2025-05-17T00:51:22.774217Z INFO ExtHandler ExtHandler Certificate with thumbprint 8CD7203E2EFC7E7BD17BABD868A63535C1B7F2BC has no matching private key. 
May 17 00:51:22.774595 waagent[1617]: 2025-05-17T00:51:22.774545Z INFO ExtHandler ExtHandler Fetch goal state completed May 17 00:51:22.793949 waagent[1617]: 2025-05-17T00:51:22.793892Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 1d917937-94d2-4e52-87f4-bb6142cf5e44 New eTag: 5880490296872575518] May 17 00:51:22.794666 waagent[1617]: 2025-05-17T00:51:22.794610Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob May 17 00:51:22.865009 waagent[1617]: 2025-05-17T00:51:22.864875Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:51:22.902705 waagent[1617]: 2025-05-17T00:51:22.902611Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1617 May 17 00:51:22.906527 waagent[1617]: 2025-05-17T00:51:22.906462Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:51:22.907858 waagent[1617]: 2025-05-17T00:51:22.907800Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 17 00:51:23.024783 waagent[1617]: 2025-05-17T00:51:23.024669Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:51:23.025235 waagent[1617]: 2025-05-17T00:51:23.025071Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:51:23.033411 waagent[1617]: 2025-05-17T00:51:23.033350Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:51:23.033912 waagent[1617]: 2025-05-17T00:51:23.033850Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:51:23.035072 waagent[1617]: 2025-05-17T00:51:23.035006Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] May 17 00:51:23.036567 waagent[1617]: 2025-05-17T00:51:23.036495Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:51:23.037241 waagent[1617]: 2025-05-17T00:51:23.037178Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:51:23.037502 waagent[1617]: 2025-05-17T00:51:23.037451Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:51:23.038158 waagent[1617]: 2025-05-17T00:51:23.038079Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
May 17 00:51:23.038544 waagent[1617]: 2025-05-17T00:51:23.038490Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:51:23.038544 waagent[1617]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:51:23.038544 waagent[1617]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:51:23.038544 waagent[1617]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:51:23.038544 waagent[1617]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:51:23.038544 waagent[1617]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:51:23.038544 waagent[1617]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:51:23.040916 waagent[1617]: 2025-05-17T00:51:23.040753Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:51:23.041638 waagent[1617]: 2025-05-17T00:51:23.041553Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:51:23.041751 waagent[1617]: 2025-05-17T00:51:23.041677Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:51:23.042693 waagent[1617]: 2025-05-17T00:51:23.042618Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:51:23.042793 waagent[1617]: 2025-05-17T00:51:23.042726Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 17 00:51:23.043337 waagent[1617]: 2025-05-17T00:51:23.043270Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:51:23.043452 waagent[1617]: 2025-05-17T00:51:23.043354Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:51:23.044929 waagent[1617]: 2025-05-17T00:51:23.044875Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:51:23.045581 waagent[1617]: 2025-05-17T00:51:23.045514Z INFO EnvHandler ExtHandler Configure routes May 17 00:51:23.047062 waagent[1617]: 2025-05-17T00:51:23.047015Z INFO EnvHandler ExtHandler Gateway:None May 17 00:51:23.047338 waagent[1617]: 2025-05-17T00:51:23.047286Z INFO EnvHandler ExtHandler Routes:None May 17 00:51:23.053942 waagent[1617]: 2025-05-17T00:51:23.053864Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) May 17 00:51:23.056571 waagent[1617]: 2025-05-17T00:51:23.056506Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:51:23.059439 waagent[1617]: 2025-05-17T00:51:23.059366Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' May 17 00:51:23.082611 waagent[1617]: 2025-05-17T00:51:23.082488Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1608' May 17 00:51:23.108127 waagent[1617]: 2025-05-17T00:51:23.108036Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
May 17 00:51:23.177231 waagent[1617]: 2025-05-17T00:51:23.177041Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:51:23.177231 waagent[1617]: Executing ['ip', '-a', '-o', 'link']: May 17 00:51:23.177231 waagent[1617]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:51:23.177231 waagent[1617]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:2a:c9 brd ff:ff:ff:ff:ff:ff May 17 00:51:23.177231 waagent[1617]: 3: enP60747s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:2a:c9 brd ff:ff:ff:ff:ff:ff\ altname enP60747p0s2 May 17 00:51:23.177231 waagent[1617]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:51:23.177231 waagent[1617]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:51:23.177231 waagent[1617]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:51:23.177231 waagent[1617]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:51:23.177231 waagent[1617]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:51:23.177231 waagent[1617]: 2: eth0 inet6 fe80::222:48ff:feb7:2ac9/64 scope link \ valid_lft forever preferred_lft forever May 17 00:51:23.408506 waagent[1617]: 2025-05-17T00:51:23.408390Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting May 17 00:51:24.385664 waagent[1546]: 2025-05-17T00:51:24.385541Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running May 17 00:51:24.392946 waagent[1546]: 2025-05-17T00:51:24.392895Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent May 17 00:51:25.658190 waagent[1649]: 2025-05-17T00:51:25.658070Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1) May 17 00:51:25.659959 waagent[1649]: 2025-05-17T00:51:25.659899Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7 May 17 00:51:25.660224 waagent[1649]: 2025-05-17T00:51:25.660175Z INFO ExtHandler ExtHandler Python: 3.9.16 May 17 00:51:25.660440 waagent[1649]: 2025-05-17T00:51:25.660395Z INFO ExtHandler ExtHandler CPU Arch: aarch64 May 17 00:51:25.674295 waagent[1649]: 2025-05-17T00:51:25.674180Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; May 17 00:51:25.674859 waagent[1649]: 2025-05-17T00:51:25.674803Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:51:25.675122 waagent[1649]: 2025-05-17T00:51:25.675050Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:51:25.675444 waagent[1649]: 2025-05-17T00:51:25.675392Z INFO ExtHandler ExtHandler Initializing the goal state... 
May 17 00:51:25.697452 waagent[1649]: 2025-05-17T00:51:25.697368Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 17 00:51:25.712851 waagent[1649]: 2025-05-17T00:51:25.712792Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 17 00:51:25.714116 waagent[1649]: 2025-05-17T00:51:25.714042Z INFO ExtHandler May 17 00:51:25.714374 waagent[1649]: 2025-05-17T00:51:25.714324Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1fb84853-84c4-46d2-bedf-b8c1fa6a5356 eTag: 5880490296872575518 source: Fabric] May 17 00:51:25.715224 waagent[1649]: 2025-05-17T00:51:25.715165Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 17 00:51:25.716552 waagent[1649]: 2025-05-17T00:51:25.716494Z INFO ExtHandler May 17 00:51:25.716779 waagent[1649]: 2025-05-17T00:51:25.716732Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 17 00:51:25.723922 waagent[1649]: 2025-05-17T00:51:25.723873Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 17 00:51:25.724540 waagent[1649]: 2025-05-17T00:51:25.724488Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:51:25.745364 waagent[1649]: 2025-05-17T00:51:25.745304Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. May 17 00:51:25.818975 waagent[1649]: 2025-05-17T00:51:25.818841Z INFO ExtHandler Downloaded certificate {'thumbprint': '8CD7203E2EFC7E7BD17BABD868A63535C1B7F2BC', 'hasPrivateKey': False} May 17 00:51:25.820247 waagent[1649]: 2025-05-17T00:51:25.820187Z INFO ExtHandler Downloaded certificate {'thumbprint': '6C5557994A1352BDCD0C782CD6386FE2FD3EA7BA', 'hasPrivateKey': True} May 17 00:51:25.821406 waagent[1649]: 2025-05-17T00:51:25.821349Z INFO ExtHandler Fetch goal state from WireServer completed May 17 00:51:25.822391 waagent[1649]: 2025-05-17T00:51:25.822333Z INFO ExtHandler ExtHandler Goal state initialization completed. 
May 17 00:51:25.841991 waagent[1649]: 2025-05-17T00:51:25.841887Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) May 17 00:51:25.851313 waagent[1649]: 2025-05-17T00:51:25.851216Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:51:25.855037 waagent[1649]: 2025-05-17T00:51:25.854944Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] May 17 00:51:25.855377 waagent[1649]: 2025-05-17T00:51:25.855326Z INFO ExtHandler ExtHandler Checking state of the firewall May 17 00:51:26.009205 waagent[1649]: 2025-05-17T00:51:26.009012Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: May 17 00:51:26.009205 waagent[1649]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:51:26.009205 waagent[1649]: pkts bytes target prot opt in out source destination May 17 00:51:26.009205 waagent[1649]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:51:26.009205 waagent[1649]: pkts bytes target prot opt in out source destination May 17 00:51:26.009205 waagent[1649]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:51:26.009205 waagent[1649]: pkts bytes target prot opt in out source destination May 17 00:51:26.009205 waagent[1649]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:51:26.009205 waagent[1649]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:51:26.009205 waagent[1649]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:51:26.010575 waagent[1649]: 2025-05-17T00:51:26.010517Z INFO ExtHandler ExtHandler Setting up persistent firewall rules May 17 00:51:26.013443 waagent[1649]: 2025-05-17T00:51:26.013331Z INFO ExtHandler ExtHandler The firewalld service is not present on the system May 17 00:51:26.013806 waagent[1649]: 2025-05-17T00:51:26.013754Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:51:26.014272 waagent[1649]: 2025-05-17T00:51:26.014216Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:51:26.021858 waagent[1649]: 2025-05-17T00:51:26.021797Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 17 00:51:26.022383 waagent[1649]: 2025-05-17T00:51:26.022323Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:51:26.030603 waagent[1649]: 2025-05-17T00:51:26.030538Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1649 May 17 00:51:26.033896 waagent[1649]: 2025-05-17T00:51:26.033833Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:51:26.034756 waagent[1649]: 2025-05-17T00:51:26.034697Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled May 17 00:51:26.035655 waagent[1649]: 2025-05-17T00:51:26.035598Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 17 00:51:26.038544 waagent[1649]: 2025-05-17T00:51:26.038480Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 17 00:51:26.039902 waagent[1649]: 2025-05-17T00:51:26.039828Z INFO ExtHandler ExtHandler Starting env monitor service. May 17 00:51:26.040721 waagent[1649]: 2025-05-17T00:51:26.040660Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:51:26.040983 waagent[1649]: 2025-05-17T00:51:26.040934Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:51:26.041591 waagent[1649]: 2025-05-17T00:51:26.041520Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:51:26.042167 waagent[1649]: 2025-05-17T00:51:26.042077Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:51:26.042745 waagent[1649]: 2025-05-17T00:51:26.042564Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:51:26.042817 waagent[1649]: 2025-05-17T00:51:26.042742Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 17 00:51:26.043195 waagent[1649]: 2025-05-17T00:51:26.043125Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:51:26.043773 waagent[1649]: 2025-05-17T00:51:26.043699Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:51:26.043773 waagent[1649]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:51:26.043773 waagent[1649]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:51:26.043773 waagent[1649]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:51:26.043773 waagent[1649]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:51:26.043773 waagent[1649]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:51:26.043773 waagent[1649]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:51:26.044333 waagent[1649]: 2025-05-17T00:51:26.044268Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:51:26.045070 waagent[1649]: 2025-05-17T00:51:26.044985Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:51:26.045351 waagent[1649]: 2025-05-17T00:51:26.045284Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
May 17 00:51:26.049025 waagent[1649]: 2025-05-17T00:51:26.048948Z INFO EnvHandler ExtHandler Configure routes May 17 00:51:26.049623 waagent[1649]: 2025-05-17T00:51:26.049561Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:51:26.053612 waagent[1649]: 2025-05-17T00:51:26.053526Z INFO EnvHandler ExtHandler Gateway:None May 17 00:51:26.054723 waagent[1649]: 2025-05-17T00:51:26.054655Z INFO EnvHandler ExtHandler Routes:None May 17 00:51:26.062522 waagent[1649]: 2025-05-17T00:51:26.062453Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:51:26.062522 waagent[1649]: Executing ['ip', '-a', '-o', 'link']: May 17 00:51:26.062522 waagent[1649]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:51:26.062522 waagent[1649]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:2a:c9 brd ff:ff:ff:ff:ff:ff May 17 00:51:26.062522 waagent[1649]: 3: enP60747s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:2a:c9 brd ff:ff:ff:ff:ff:ff\ altname enP60747p0s2 May 17 00:51:26.062522 waagent[1649]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:51:26.062522 waagent[1649]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:51:26.062522 waagent[1649]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:51:26.062522 waagent[1649]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:51:26.062522 waagent[1649]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:51:26.062522 waagent[1649]: 2: eth0 inet6 fe80::222:48ff:feb7:2ac9/64 scope link \ valid_lft forever preferred_lft forever May 17 00:51:26.068778 waagent[1649]: 2025-05-17T00:51:26.068385Z INFO ExtHandler ExtHandler Downloading agent manifest May 17 00:51:26.087669 waagent[1649]: 2025-05-17T00:51:26.087566Z INFO ExtHandler ExtHandler May 17 00:51:26.088212 waagent[1649]: 2025-05-17T00:51:26.088140Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 28107b2d-094a-439e-8199-e88a8b0b0f12 correlation 9dfa2b90-e065-47cc-bfd2-7b492964f9c2 created: 2025-05-17T00:49:50.343246Z] May 17 00:51:26.092332 waagent[1649]: 2025-05-17T00:51:26.092256Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 17 00:51:26.097981 waagent[1649]: 2025-05-17T00:51:26.097914Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] May 17 00:51:26.124044 waagent[1649]: 2025-05-17T00:51:26.123920Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:51:26.128361 waagent[1649]: 2025-05-17T00:51:26.128192Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 17 00:51:26.133584 waagent[1649]: 2025-05-17T00:51:26.133426Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 08CD8C03-D18E-49D7-9BEB-AF2860F41459;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 17 00:51:26.139587 waagent[1649]: 2025-05-17T00:51:26.139521Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:51:30.376981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 17 00:51:30.377182 systemd[1]: Stopped kubelet.service. May 17 00:51:30.378549 systemd[1]: Starting kubelet.service... May 17 00:51:30.581589 systemd[1]: Started kubelet.service. May 17 00:51:30.616456 kubelet[1696]: E0517 00:51:30.616421 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:51:30.618873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:51:30.618992 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:51:37.035253 systemd[1]: Created slice system-sshd.slice. May 17 00:51:37.037389 systemd[1]: Started sshd@0-10.200.20.35:22-10.200.16.10:43232.service. May 17 00:51:37.671162 sshd[1702]: Accepted publickey for core from 10.200.16.10 port 43232 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:51:37.688579 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:37.692562 systemd[1]: Started session-3.scope. May 17 00:51:37.692846 systemd-logind[1437]: New session 3 of user core. May 17 00:51:38.077557 systemd[1]: Started sshd@1-10.200.20.35:22-10.200.16.10:43236.service. May 17 00:51:38.532850 sshd[1707]: Accepted publickey for core from 10.200.16.10 port 43236 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:51:38.534458 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:38.538004 systemd-logind[1437]: New session 4 of user core. May 17 00:51:38.538439 systemd[1]: Started session-4.scope. May 17 00:51:38.881191 sshd[1707]: pam_unix(sshd:session): session closed for user core May 17 00:51:38.883802 systemd[1]: sshd@1-10.200.20.35:22-10.200.16.10:43236.service: Deactivated successfully. May 17 00:51:38.884571 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:51:38.885119 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit. May 17 00:51:38.886030 systemd-logind[1437]: Removed session 4. May 17 00:51:38.955020 systemd[1]: Started sshd@2-10.200.20.35:22-10.200.16.10:33382.service. May 17 00:51:39.405065 sshd[1713]: Accepted publickey for core from 10.200.16.10 port 33382 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:51:39.406589 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:39.410629 systemd[1]: Started session-5.scope. May 17 00:51:39.411932 systemd-logind[1437]: New session 5 of user core. May 17 00:51:39.738945 sshd[1713]: pam_unix(sshd:session): session closed for user core May 17 00:51:39.741212 systemd[1]: sshd@2-10.200.20.35:22-10.200.16.10:33382.service: Deactivated successfully. May 17 00:51:39.741847 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:51:39.742342 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit. May 17 00:51:39.743133 systemd-logind[1437]: Removed session 5. May 17 00:51:39.817988 systemd[1]: Started sshd@3-10.200.20.35:22-10.200.16.10:33390.service. 
May 17 00:51:40.301761 sshd[1719]: Accepted publickey for core from 10.200.16.10 port 33390 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:51:40.303025 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:40.306882 systemd-logind[1437]: New session 6 of user core. May 17 00:51:40.307338 systemd[1]: Started session-6.scope. May 17 00:51:40.626967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:51:40.627162 systemd[1]: Stopped kubelet.service. May 17 00:51:40.628498 systemd[1]: Starting kubelet.service... May 17 00:51:40.648229 sshd[1719]: pam_unix(sshd:session): session closed for user core May 17 00:51:40.651418 systemd[1]: sshd@3-10.200.20.35:22-10.200.16.10:33390.service: Deactivated successfully. May 17 00:51:40.652168 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:51:40.652680 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit. May 17 00:51:40.653523 systemd-logind[1437]: Removed session 6. May 17 00:51:40.727538 systemd[1]: Started sshd@4-10.200.20.35:22-10.200.16.10:33394.service. May 17 00:51:40.910036 systemd[1]: Started kubelet.service. May 17 00:51:40.943228 kubelet[1731]: E0517 00:51:40.943187 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:51:40.945458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:51:40.945583 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:51:41.209572 sshd[1727]: Accepted publickey for core from 10.200.16.10 port 33394 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:51:41.210629 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:41.214665 systemd[1]: Started session-7.scope. May 17 00:51:41.214968 systemd-logind[1437]: New session 7 of user core. May 17 00:51:41.771122 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:51:41.771344 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:51:41.782712 systemd[1]: Starting coreos-metadata.service... 
May 17 00:51:41.847974 coreos-metadata[1742]: May 17 00:51:41.847 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:51:41.850975 coreos-metadata[1742]: May 17 00:51:41.850 INFO Fetch successful May 17 00:51:41.851128 coreos-metadata[1742]: May 17 00:51:41.851 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 17 00:51:41.852921 coreos-metadata[1742]: May 17 00:51:41.852 INFO Fetch successful May 17 00:51:41.853281 coreos-metadata[1742]: May 17 00:51:41.853 INFO Fetching http://168.63.129.16/machine/1bead51c-5df2-451a-9526-ffd0e4b9f30b/388b7c7e%2Dedfe%2D465a%2D97f2%2D34f6555071d1.%5Fci%2D3510.3.7%2Dn%2D44db7a48ea?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 17 00:51:41.855123 coreos-metadata[1742]: May 17 00:51:41.855 INFO Fetch successful May 17 00:51:41.888055 coreos-metadata[1742]: May 17 00:51:41.887 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 17 00:51:41.899454 coreos-metadata[1742]: May 17 00:51:41.899 INFO Fetch successful May 17 00:51:41.907878 systemd[1]: Finished coreos-metadata.service. May 17 00:51:42.332966 systemd[1]: Stopped kubelet.service. May 17 00:51:42.335153 systemd[1]: Starting kubelet.service... May 17 00:51:42.376467 systemd[1]: Reloading. May 17 00:51:42.470968 /usr/lib/systemd/system-generators/torcx-generator[1808]: time="2025-05-17T00:51:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:51:42.471370 /usr/lib/systemd/system-generators/torcx-generator[1808]: time="2025-05-17T00:51:42Z" level=info msg="torcx already run" May 17 00:51:42.541005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:51:42.541194 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:51:42.556567 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:51:42.664403 systemd[1]: Started kubelet.service. May 17 00:51:42.666977 systemd[1]: Stopping kubelet.service... May 17 00:51:42.667490 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:51:42.667674 systemd[1]: Stopped kubelet.service. May 17 00:51:42.669500 systemd[1]: Starting kubelet.service... May 17 00:51:42.838449 systemd[1]: Started kubelet.service. May 17 00:51:42.997371 kubelet[1863]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:51:42.997701 kubelet[1863]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:51:42.997748 kubelet[1863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:51:42.997879 kubelet[1863]: I0517 00:51:42.997849 1863 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:51:44.258328 kubelet[1863]: I0517 00:51:44.258291 1863 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:51:44.258658 kubelet[1863]: I0517 00:51:44.258644 1863 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:51:44.259141 kubelet[1863]: I0517 00:51:44.259123 1863 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:51:44.286033 kubelet[1863]: I0517 00:51:44.285845 1863 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:51:44.297725 kubelet[1863]: E0517 00:51:44.297681 1863 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:51:44.297725 kubelet[1863]: I0517 00:51:44.297723 1863 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:51:44.300581 kubelet[1863]: I0517 00:51:44.300550 1863 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:51:44.302063 kubelet[1863]: I0517 00:51:44.302022 1863 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:51:44.302244 kubelet[1863]: I0517 00:51:44.302063 1863 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.20.35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:51:44.302347 kubelet[1863]: I0517 00:51:44.302246 1863 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:51:44.302347 kubelet[1863]: I0517 00:51:44.302255 1863 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:51:44.302398 
kubelet[1863]: I0517 00:51:44.302367 1863 state_mem.go:36] "Initialized new in-memory state store" May 17 00:51:44.308159 kubelet[1863]: I0517 00:51:44.308127 1863 kubelet.go:480] "Attempting to sync node with API server" May 17 00:51:44.308159 kubelet[1863]: I0517 00:51:44.308153 1863 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:51:44.308298 kubelet[1863]: I0517 00:51:44.308173 1863 kubelet.go:386] "Adding apiserver pod source" May 17 00:51:44.312374 kubelet[1863]: I0517 00:51:44.312348 1863 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:51:44.312471 kubelet[1863]: E0517 00:51:44.312355 1863 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:44.312699 kubelet[1863]: E0517 00:51:44.312667 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:44.313378 kubelet[1863]: I0517 00:51:44.313357 1863 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:51:44.314065 kubelet[1863]: I0517 00:51:44.314050 1863 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:51:44.314214 kubelet[1863]: W0517 00:51:44.314203 1863 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:51:44.316435 kubelet[1863]: I0517 00:51:44.316419 1863 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:51:44.316560 kubelet[1863]: I0517 00:51:44.316550 1863 server.go:1289] "Started kubelet" May 17 00:51:44.321908 kubelet[1863]: I0517 00:51:44.321856 1863 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:51:44.322309 kubelet[1863]: I0517 00:51:44.322295 1863 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:51:44.322566 kubelet[1863]: I0517 00:51:44.322537 1863 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:51:44.323762 kubelet[1863]: I0517 00:51:44.323738 1863 server.go:317] "Adding debug handlers to kubelet server" May 17 00:51:44.327430 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:51:44.331691 kubelet[1863]: E0517 00:51:44.330775 1863 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.35.18402a3eb9fde7d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.35,UID:10.200.20.35,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.20.35,},FirstTimestamp:2025-05-17 00:51:44.316516313 +0000 UTC m=+1.473729060,LastTimestamp:2025-05-17 00:51:44.316516313 +0000 UTC m=+1.473729060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.35,}" May 17 00:51:44.332775 kubelet[1863]: E0517 00:51:44.332746 1863 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:51:44.332983 kubelet[1863]: E0517 00:51:44.332944 1863 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.200.20.35\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:51:44.333370 kubelet[1863]: I0517 00:51:44.333344 1863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:51:44.333573 kubelet[1863]: E0517 00:51:44.333553 1863 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:51:44.334203 kubelet[1863]: I0517 00:51:44.334185 1863 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:51:44.334978 kubelet[1863]: I0517 00:51:44.334947 1863 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:51:44.335172 kubelet[1863]: E0517 00:51:44.335153 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:44.337301 kubelet[1863]: I0517 00:51:44.337278 1863 factory.go:223] Registration of the systemd container factory successfully May 17 00:51:44.337520 kubelet[1863]: I0517 00:51:44.337500 1863 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:51:44.341384 kubelet[1863]: I0517 00:51:44.338776 1863 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:51:44.342210 kubelet[1863]: I0517 00:51:44.342186 1863 factory.go:223] Registration of the containerd container factory successfully May 17 00:51:44.348598 kubelet[1863]: I0517 00:51:44.348570 1863 reconciler.go:26] "Reconciler: start to sync state" May 17 00:51:44.368570 kubelet[1863]: E0517 00:51:44.368535 1863 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.20.35\" not found" node="10.200.20.35" May 17 00:51:44.373869 kubelet[1863]: I0517 00:51:44.373852 1863 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:51:44.374018 kubelet[1863]: I0517 00:51:44.374003 1863 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:51:44.374143 kubelet[1863]: I0517 00:51:44.374132 1863 state_mem.go:36] "Initialized new in-memory state store" May 17 00:51:44.378829 kubelet[1863]: I0517 00:51:44.378809 1863 policy_none.go:49] "None policy: Start" May 17 00:51:44.378931 kubelet[1863]: I0517 00:51:44.378920 1863 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:51:44.378985 kubelet[1863]: I0517 00:51:44.378977 1863 state_mem.go:35] "Initializing new in-memory state store" May 17 00:51:44.387287 systemd[1]: Created slice kubepods.slice. May 17 00:51:44.391764 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:51:44.394321 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:51:44.403943 kubelet[1863]: E0517 00:51:44.403912 1863 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:51:44.404093 kubelet[1863]: I0517 00:51:44.404062 1863 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:51:44.404151 kubelet[1863]: I0517 00:51:44.404078 1863 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:51:44.404809 kubelet[1863]: I0517 00:51:44.404750 1863 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:51:44.405276 kubelet[1863]: E0517 00:51:44.405232 1863 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:51:44.405349 kubelet[1863]: E0517 00:51:44.405277 1863 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.35\" not found" May 17 00:51:44.418559 kubelet[1863]: I0517 00:51:44.418523 1863 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:51:44.419422 kubelet[1863]: I0517 00:51:44.419398 1863 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:51:44.419422 kubelet[1863]: I0517 00:51:44.419421 1863 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:51:44.419522 kubelet[1863]: I0517 00:51:44.419443 1863 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:51:44.419522 kubelet[1863]: I0517 00:51:44.419451 1863 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:51:44.419522 kubelet[1863]: E0517 00:51:44.419489 1863 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 17 00:51:44.505585 kubelet[1863]: I0517 00:51:44.505556 1863 kubelet_node_status.go:75] "Attempting to register node" node="10.200.20.35" May 17 00:51:44.511438 kubelet[1863]: I0517 00:51:44.509938 1863 kubelet_node_status.go:78] "Successfully registered node" node="10.200.20.35" May 17 00:51:44.511438 kubelet[1863]: E0517 00:51:44.511114 1863 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.200.20.35\": node \"10.200.20.35\" not found" May 17 00:51:44.533232 kubelet[1863]: E0517 00:51:44.533203 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:44.634257 kubelet[1863]: E0517 00:51:44.634231 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:44.734914 kubelet[1863]: E0517 00:51:44.734893 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:44.836219 kubelet[1863]: E0517 00:51:44.835829 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:44.888408 sudo[1738]: pam_unix(sudo:session): session closed for user root May 17 00:51:44.936686 kubelet[1863]: E0517 00:51:44.936656 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:44.994297 sshd[1727]: pam_unix(sshd:session): session closed for user core May 17 00:51:44.996273 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:51:44.996854 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit. May 17 00:51:44.996962 systemd[1]: sshd@4-10.200.20.35:22-10.200.16.10:33394.service: Deactivated successfully. May 17 00:51:44.997957 systemd-logind[1437]: Removed session 7. 
May 17 00:51:45.037181 kubelet[1863]: E0517 00:51:45.037148 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:45.137806 kubelet[1863]: E0517 00:51:45.137783 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:45.238491 kubelet[1863]: E0517 00:51:45.238467 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:45.262067 kubelet[1863]: I0517 00:51:45.262047 1863 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 17 00:51:45.262600 kubelet[1863]: I0517 00:51:45.262545 1863 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" May 17 00:51:45.262710 kubelet[1863]: I0517 00:51:45.262694 1863 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" May 17 00:51:45.313427 kubelet[1863]: E0517 00:51:45.313406 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:45.339058 kubelet[1863]: E0517 00:51:45.339033 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:45.439799 kubelet[1863]: E0517 00:51:45.439389 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:45.540515 kubelet[1863]: E0517 00:51:45.540482 1863 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.35\" not found" May 17 00:51:45.642093 kubelet[1863]: I0517 00:51:45.642041 1863 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 17 00:51:45.642448 env[1446]: time="2025-05-17T00:51:45.642364135Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:51:45.642939 kubelet[1863]: I0517 00:51:45.642912 1863 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 17 00:51:46.314328 kubelet[1863]: E0517 00:51:46.314296 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:46.314663 kubelet[1863]: I0517 00:51:46.314357 1863 apiserver.go:52] "Watching apiserver" May 17 00:51:46.326222 systemd[1]: Created slice kubepods-burstable-poda85e6f78_a465_42ee_b60e_df931819be2c.slice. May 17 00:51:46.336733 systemd[1]: Created slice kubepods-besteffort-poddb14e593_5aa9_4ce9_aaef_10705caa2e4e.slice. 
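The kubelet entries just above show the node's pod CIDR changing from empty to 192.168.1.0/24 and being pushed to the container runtime over CRI. As a rough sketch of what that allocation covers for node 10.200.20.35 (the variable names are illustrative, not kubelet code), Python's ipaddress module can enumerate the block:

```python
import ipaddress

# Pod CIDR assigned to this node, as reported in the kubelet log above.
pod_cidr = ipaddress.ip_network("192.168.1.0/24")

print(pod_cidr.num_addresses)           # 256 addresses in the block
print(pod_cidr[1], "-", pod_cidr[-2])   # 192.168.1.1 - 192.168.1.254, roughly what a CNI plugin can hand out
```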
May 17 00:51:46.342487 kubelet[1863]: I0517 00:51:46.342461 1863 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:51:46.358005 kubelet[1863]: I0517 00:51:46.357978 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-hostproc\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.358219 kubelet[1863]: I0517 00:51:46.358200 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-xtables-lock\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.358335 kubelet[1863]: I0517 00:51:46.358321 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-host-proc-sys-kernel\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.358436 kubelet[1863]: I0517 00:51:46.358424 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-cgroup\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.358536 kubelet[1863]: I0517 00:51:46.358523 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cni-path\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.358760 kubelet[1863]: I0517 00:51:46.358742 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-etc-cni-netd\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.358856 kubelet[1863]: I0517 00:51:46.358844 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-lib-modules\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.358928 kubelet[1863]: I0517 00:51:46.358915 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a85e6f78-a465-42ee-b60e-df931819be2c-clustermesh-secrets\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.358999 kubelet[1863]: I0517 00:51:46.358988 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db14e593-5aa9-4ce9-aaef-10705caa2e4e-lib-modules\") pod \"kube-proxy-zfzcj\" (UID: \"db14e593-5aa9-4ce9-aaef-10705caa2e4e\") " pod="kube-system/kube-proxy-zfzcj" May 17 00:51:46.359073 kubelet[1863]: 
I0517 00:51:46.359061 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-bpf-maps\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.359170 kubelet[1863]: I0517 00:51:46.359157 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-config-path\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.359250 kubelet[1863]: I0517 00:51:46.359237 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-host-proc-sys-net\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.359321 kubelet[1863]: I0517 00:51:46.359307 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9csq4\" (UniqueName: \"kubernetes.io/projected/a85e6f78-a465-42ee-b60e-df931819be2c-kube-api-access-9csq4\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.359386 kubelet[1863]: I0517 00:51:46.359375 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db14e593-5aa9-4ce9-aaef-10705caa2e4e-kube-proxy\") pod \"kube-proxy-zfzcj\" (UID: \"db14e593-5aa9-4ce9-aaef-10705caa2e4e\") " pod="kube-system/kube-proxy-zfzcj" May 17 00:51:46.359462 kubelet[1863]: I0517 00:51:46.359451 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-run\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.359601 kubelet[1863]: I0517 00:51:46.359572 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a85e6f78-a465-42ee-b60e-df931819be2c-hubble-tls\") pod \"cilium-26kvm\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " pod="kube-system/cilium-26kvm" May 17 00:51:46.359724 kubelet[1863]: I0517 00:51:46.359711 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db14e593-5aa9-4ce9-aaef-10705caa2e4e-xtables-lock\") pod \"kube-proxy-zfzcj\" (UID: \"db14e593-5aa9-4ce9-aaef-10705caa2e4e\") " pod="kube-system/kube-proxy-zfzcj" May 17 00:51:46.359834 kubelet[1863]: I0517 00:51:46.359821 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd7f5\" (UniqueName: \"kubernetes.io/projected/db14e593-5aa9-4ce9-aaef-10705caa2e4e-kube-api-access-nd7f5\") pod \"kube-proxy-zfzcj\" (UID: \"db14e593-5aa9-4ce9-aaef-10705caa2e4e\") " pod="kube-system/kube-proxy-zfzcj" May 17 00:51:46.461638 kubelet[1863]: I0517 00:51:46.461600 1863 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:51:46.634443 env[1446]: time="2025-05-17T00:51:46.634390977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26kvm,Uid:a85e6f78-a465-42ee-b60e-df931819be2c,Namespace:kube-system,Attempt:0,}" May 17 00:51:46.644786 env[1446]: time="2025-05-17T00:51:46.644514096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfzcj,Uid:db14e593-5aa9-4ce9-aaef-10705caa2e4e,Namespace:kube-system,Attempt:0,}" May 17 00:51:47.314495 kubelet[1863]: E0517 00:51:47.314450 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:47.753735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226664538.mount: Deactivated successfully. May 17 00:51:47.787377 env[1446]: time="2025-05-17T00:51:47.787337701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:47.797826 env[1446]: time="2025-05-17T00:51:47.797787736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:47.801191 env[1446]: time="2025-05-17T00:51:47.801156681Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:47.813850 env[1446]: time="2025-05-17T00:51:47.813812093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:47.817629 env[1446]: time="2025-05-17T00:51:47.817597600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:47.819709 env[1446]: time="2025-05-17T00:51:47.819671536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:47.823745 env[1446]: time="2025-05-17T00:51:47.823713405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:47.833691 env[1446]: time="2025-05-17T00:51:47.833657397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:47.878030 env[1446]: time="2025-05-17T00:51:47.877947879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:51:47.878214 env[1446]: time="2025-05-17T00:51:47.878006480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:51:47.878214 env[1446]: time="2025-05-17T00:51:47.878017440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:51:47.878382 env[1446]: time="2025-05-17T00:51:47.878347202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1 pid=1916 runtime=io.containerd.runc.v2 May 17 00:51:47.896112 env[1446]: time="2025-05-17T00:51:47.895894610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:51:47.896112 env[1446]: time="2025-05-17T00:51:47.895937730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:51:47.896112 env[1446]: time="2025-05-17T00:51:47.895948210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:51:47.896398 env[1446]: time="2025-05-17T00:51:47.896358813Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cda6ab8c49bf9e5efbc30fcf4cf78792cd51e8c27306aaa5c58c912e92e62711 pid=1940 runtime=io.containerd.runc.v2 May 17 00:51:47.900819 systemd[1]: Started cri-containerd-6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1.scope. May 17 00:51:47.917547 systemd[1]: Started cri-containerd-cda6ab8c49bf9e5efbc30fcf4cf78792cd51e8c27306aaa5c58c912e92e62711.scope. May 17 00:51:47.931478 env[1446]: time="2025-05-17T00:51:47.931412788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26kvm,Uid:a85e6f78-a465-42ee-b60e-df931819be2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\"" May 17 00:51:47.935584 env[1446]: time="2025-05-17T00:51:47.935545418Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:51:47.952431 env[1446]: time="2025-05-17T00:51:47.952382340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfzcj,Uid:db14e593-5aa9-4ce9-aaef-10705caa2e4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cda6ab8c49bf9e5efbc30fcf4cf78792cd51e8c27306aaa5c58c912e92e62711\"" May 17 00:51:48.314804 kubelet[1863]: E0517 00:51:48.314760 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:48.736387 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 17 00:51:49.315110 kubelet[1863]: E0517 00:51:49.315071 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:50.315789 kubelet[1863]: E0517 00:51:50.315749 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:51.316538 kubelet[1863]: E0517 00:51:51.316487 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:52.316746 kubelet[1863]: E0517 00:51:52.316703 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:53.245413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1940191480.mount: Deactivated successfully. 
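Unit names such as "var-lib-containerd-tmpmounts-containerd\x2dmount1940191480.mount" in the systemd messages above use systemd's path escaping: each '/' in the mounted path becomes '-', and a literal '-' is written as '\x2d'. A minimal sketch of the reverse mapping, assuming only those two rules (the helper name is mine, not a systemd API):

```python
import re

def mount_unit_to_path(unit: str) -> str:
    """Best-effort decode of a systemd .mount unit name back to its mount point."""
    name = unit.removesuffix(".mount")
    # '\xNN' encodes a literal byte (e.g. '\x2d' -> '-'); a bare '-' encodes '/'.
    return "/" + re.sub(
        r"\\x([0-9a-fA-F]{2})|-",
        lambda m: chr(int(m.group(1), 16)) if m.group(1) else "/",
        name,
    )

print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1940191480.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount1940191480
```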
May 17 00:51:53.317743 kubelet[1863]: E0517 00:51:53.317699 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:54.318552 kubelet[1863]: E0517 00:51:54.318510 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:55.173528 update_engine[1439]: I0517 00:51:55.173135 1439 update_attempter.cc:509] Updating boot flags... May 17 00:51:55.318901 kubelet[1863]: E0517 00:51:55.318856 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:55.504654 env[1446]: time="2025-05-17T00:51:55.504321432Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:55.514951 env[1446]: time="2025-05-17T00:51:55.514112114Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:55.523414 env[1446]: time="2025-05-17T00:51:55.523375594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:55.523904 env[1446]: time="2025-05-17T00:51:55.523874716Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 17 00:51:55.525985 env[1446]: time="2025-05-17T00:51:55.525946685Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 17 00:51:55.530970 env[1446]: time="2025-05-17T00:51:55.530930267Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:51:55.569230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276724579.mount: Deactivated successfully. May 17 00:51:55.574312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059851360.mount: Deactivated successfully. May 17 00:51:55.595898 env[1446]: time="2025-05-17T00:51:55.595843349Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\"" May 17 00:51:55.596819 env[1446]: time="2025-05-17T00:51:55.596785473Z" level=info msg="StartContainer for \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\"" May 17 00:51:55.616281 systemd[1]: Started cri-containerd-292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507.scope. May 17 00:51:55.645433 env[1446]: time="2025-05-17T00:51:55.645383123Z" level=info msg="StartContainer for \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\" returns successfully" May 17 00:51:55.655629 systemd[1]: cri-containerd-292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507.scope: Deactivated successfully. 
May 17 00:51:56.319338 kubelet[1863]: E0517 00:51:56.319289 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:56.566121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507-rootfs.mount: Deactivated successfully. May 17 00:51:57.319862 kubelet[1863]: E0517 00:51:57.319820 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:57.390689 env[1446]: time="2025-05-17T00:51:57.390646383Z" level=info msg="shim disconnected" id=292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507 May 17 00:51:57.391042 env[1446]: time="2025-05-17T00:51:57.391021624Z" level=warning msg="cleaning up after shim disconnected" id=292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507 namespace=k8s.io May 17 00:51:57.391131 env[1446]: time="2025-05-17T00:51:57.391116985Z" level=info msg="cleaning up dead shim" May 17 00:51:57.397971 env[1446]: time="2025-05-17T00:51:57.397937331Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:51:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2082 runtime=io.containerd.runc.v2\n" May 17 00:51:57.457159 env[1446]: time="2025-05-17T00:51:57.457119796Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:51:57.491214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233495731.mount: Deactivated successfully. May 17 00:51:57.496456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692595381.mount: Deactivated successfully. May 17 00:51:57.518885 env[1446]: time="2025-05-17T00:51:57.518835031Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\"" May 17 00:51:57.519685 env[1446]: time="2025-05-17T00:51:57.519586554Z" level=info msg="StartContainer for \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\"" May 17 00:51:57.535979 systemd[1]: Started cri-containerd-3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15.scope. May 17 00:51:57.571414 env[1446]: time="2025-05-17T00:51:57.571325951Z" level=info msg="StartContainer for \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\" returns successfully" May 17 00:51:57.573915 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:51:57.574460 systemd[1]: Stopped systemd-sysctl.service. May 17 00:51:57.574766 systemd[1]: Stopping systemd-sysctl.service... May 17 00:51:57.577716 systemd[1]: Starting systemd-sysctl.service... May 17 00:51:57.583552 systemd[1]: cri-containerd-3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15.scope: Deactivated successfully. May 17 00:51:57.588444 systemd[1]: Finished systemd-sysctl.service. May 17 00:51:57.603681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15-rootfs.mount: Deactivated successfully. 
May 17 00:51:57.621641 env[1446]: time="2025-05-17T00:51:57.621590783Z" level=info msg="shim disconnected" id=3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15 May 17 00:51:57.621641 env[1446]: time="2025-05-17T00:51:57.621638463Z" level=warning msg="cleaning up after shim disconnected" id=3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15 namespace=k8s.io May 17 00:51:57.621811 env[1446]: time="2025-05-17T00:51:57.621648383Z" level=info msg="cleaning up dead shim" May 17 00:51:57.628526 env[1446]: time="2025-05-17T00:51:57.628478689Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:51:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2148 runtime=io.containerd.runc.v2\n" May 17 00:51:58.319985 kubelet[1863]: E0517 00:51:58.319936 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:58.416158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount492215313.mount: Deactivated successfully. May 17 00:51:58.463631 env[1446]: time="2025-05-17T00:51:58.463585323Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:51:58.524605 env[1446]: time="2025-05-17T00:51:58.524537180Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\"" May 17 00:51:58.525472 env[1446]: time="2025-05-17T00:51:58.525445784Z" level=info msg="StartContainer for \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\"" May 17 00:51:58.551206 systemd[1]: Started cri-containerd-f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0.scope. May 17 00:51:58.586558 systemd[1]: cri-containerd-f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0.scope: Deactivated successfully. May 17 00:51:58.588788 env[1446]: time="2025-05-17T00:51:58.588739570Z" level=info msg="StartContainer for \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\" returns successfully" May 17 00:51:58.622294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0-rootfs.mount: Deactivated successfully. 
May 17 00:51:58.989850 env[1446]: time="2025-05-17T00:51:58.989807283Z" level=info msg="shim disconnected" id=f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0 May 17 00:51:58.990147 env[1446]: time="2025-05-17T00:51:58.990127844Z" level=warning msg="cleaning up after shim disconnected" id=f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0 namespace=k8s.io May 17 00:51:58.990246 env[1446]: time="2025-05-17T00:51:58.990231485Z" level=info msg="cleaning up dead shim" May 17 00:51:58.996618 env[1446]: time="2025-05-17T00:51:58.996587307Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:51:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2206 runtime=io.containerd.runc.v2\n" May 17 00:51:59.041169 env[1446]: time="2025-05-17T00:51:59.041124738Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:59.051689 env[1446]: time="2025-05-17T00:51:59.051648853Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:59.058162 env[1446]: time="2025-05-17T00:51:59.058123714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:59.063613 env[1446]: time="2025-05-17T00:51:59.063576973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:51:59.064170 env[1446]: time="2025-05-17T00:51:59.064136095Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 17 00:51:59.071895 env[1446]: time="2025-05-17T00:51:59.071859840Z" level=info msg="CreateContainer within sandbox \"cda6ab8c49bf9e5efbc30fcf4cf78792cd51e8c27306aaa5c58c912e92e62711\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:51:59.108337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4240665809.mount: Deactivated successfully. May 17 00:51:59.132116 env[1446]: time="2025-05-17T00:51:59.132058562Z" level=info msg="CreateContainer within sandbox \"cda6ab8c49bf9e5efbc30fcf4cf78792cd51e8c27306aaa5c58c912e92e62711\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d129289030c32f7b7e5682d8ec448f153f11500de120d607c05a413e5a43e832\"" May 17 00:51:59.133029 env[1446]: time="2025-05-17T00:51:59.133004525Z" level=info msg="StartContainer for \"d129289030c32f7b7e5682d8ec448f153f11500de120d607c05a413e5a43e832\"" May 17 00:51:59.147695 systemd[1]: Started cri-containerd-d129289030c32f7b7e5682d8ec448f153f11500de120d607c05a413e5a43e832.scope. 
May 17 00:51:59.181215 env[1446]: time="2025-05-17T00:51:59.181136927Z" level=info msg="StartContainer for \"d129289030c32f7b7e5682d8ec448f153f11500de120d607c05a413e5a43e832\" returns successfully" May 17 00:51:59.321097 kubelet[1863]: E0517 00:51:59.320998 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:51:59.466513 env[1446]: time="2025-05-17T00:51:59.466472243Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:51:59.495493 kubelet[1863]: I0517 00:51:59.495420 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zfzcj" podStartSLOduration=4.384085712 podStartE2EDuration="15.495395299s" podCreationTimestamp="2025-05-17 00:51:44 +0000 UTC" firstStartedPulling="2025-05-17 00:51:47.953885991 +0000 UTC m=+5.111098738" lastFinishedPulling="2025-05-17 00:51:59.065195578 +0000 UTC m=+16.222408325" observedRunningTime="2025-05-17 00:51:59.473035625 +0000 UTC m=+16.630248332" watchObservedRunningTime="2025-05-17 00:51:59.495395299 +0000 UTC m=+16.652608006" May 17 00:51:59.518133 env[1446]: time="2025-05-17T00:51:59.518071575Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\"" May 17 00:51:59.519068 env[1446]: time="2025-05-17T00:51:59.519041859Z" level=info msg="StartContainer for \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\"" May 17 00:51:59.534819 systemd[1]: Started cri-containerd-a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f.scope. May 17 00:51:59.567120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993750843.mount: Deactivated successfully. May 17 00:51:59.568522 systemd[1]: cri-containerd-a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f.scope: Deactivated successfully. May 17 00:51:59.572396 env[1446]: time="2025-05-17T00:51:59.572307117Z" level=info msg="StartContainer for \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\" returns successfully" May 17 00:51:59.589644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f-rootfs.mount: Deactivated successfully. 
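In the pod_startup_latency_tracker entry for kube-proxy-zfzcj above, the reported podStartSLOduration (4.384085712) is exactly podStartE2EDuration (15.495395299s) minus the image-pull window between firstStartedPulling and lastFinishedPulling. A quick check using only the timestamps quoted in that entry (the variable names are mine, not kubelet's):

```python
# Seconds within minute 00:51 on May 17, copied from the kube-proxy-zfzcj entry above.
pod_created            = 44.0           # podCreationTimestamp     00:51:44
first_started_pulling  = 47.953885991   # firstStartedPulling      00:51:47.953885991
last_finished_pulling  = 59.065195578   # lastFinishedPulling      00:51:59.065195578
watch_observed_running = 59.495395299   # watchObservedRunningTime 00:51:59.495395299

e2e = watch_observed_running - pod_created                    # podStartE2EDuration
slo = e2e - (last_finished_pulling - first_started_pulling)   # startup time excluding the image pull
print(f"{e2e:.9f} {slo:.9f}")   # 15.495395299 4.384085712, matching the log
```

The cilium-26kvm, nginx, nfs-server-provisioner, and test-pod entries later in the log satisfy the same relationship.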
May 17 00:51:59.695421 env[1446]: time="2025-05-17T00:51:59.695375129Z" level=info msg="shim disconnected" id=a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f May 17 00:51:59.695706 env[1446]: time="2025-05-17T00:51:59.695685771Z" level=warning msg="cleaning up after shim disconnected" id=a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f namespace=k8s.io May 17 00:51:59.695803 env[1446]: time="2025-05-17T00:51:59.695787851Z" level=info msg="cleaning up dead shim" May 17 00:51:59.704829 env[1446]: time="2025-05-17T00:51:59.704790041Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:51:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2381 runtime=io.containerd.runc.v2\n" May 17 00:52:00.322005 kubelet[1863]: E0517 00:52:00.321955 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:00.470647 env[1446]: time="2025-05-17T00:52:00.470610909Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:52:00.497243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279004522.mount: Deactivated successfully. May 17 00:52:00.521774 env[1446]: time="2025-05-17T00:52:00.521730070Z" level=info msg="CreateContainer within sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\"" May 17 00:52:00.522590 env[1446]: time="2025-05-17T00:52:00.522559872Z" level=info msg="StartContainer for \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\"" May 17 00:52:00.536294 systemd[1]: Started cri-containerd-4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7.scope. May 17 00:52:00.566662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593899719.mount: Deactivated successfully. May 17 00:52:00.576263 env[1446]: time="2025-05-17T00:52:00.576172801Z" level=info msg="StartContainer for \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\" returns successfully" May 17 00:52:00.599150 systemd[1]: run-containerd-runc-k8s.io-4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7-runc.UJDbA4.mount: Deactivated successfully. May 17 00:52:00.658124 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 17 00:52:00.716796 kubelet[1863]: I0517 00:52:00.715874 1863 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:52:01.231117 kernel: Initializing XFRM netlink socket May 17 00:52:01.239110 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 17 00:52:01.322964 kubelet[1863]: E0517 00:52:01.322926 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:01.488116 kubelet[1863]: I0517 00:52:01.487727 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-26kvm" podStartSLOduration=9.897429021 podStartE2EDuration="17.48771049s" podCreationTimestamp="2025-05-17 00:51:44 +0000 UTC" firstStartedPulling="2025-05-17 00:51:47.934860293 +0000 UTC m=+5.092073040" lastFinishedPulling="2025-05-17 00:51:55.525141762 +0000 UTC m=+12.682354509" observedRunningTime="2025-05-17 00:52:01.486816008 +0000 UTC m=+18.644028835" watchObservedRunningTime="2025-05-17 00:52:01.48771049 +0000 UTC m=+18.644923237" May 17 00:52:02.323692 kubelet[1863]: E0517 00:52:02.323655 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:02.891825 systemd-networkd[1608]: cilium_host: Link UP May 17 00:52:02.891927 systemd-networkd[1608]: cilium_net: Link UP May 17 00:52:02.891930 systemd-networkd[1608]: cilium_net: Gained carrier May 17 00:52:02.892036 systemd-networkd[1608]: cilium_host: Gained carrier May 17 00:52:02.899124 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:52:02.899218 systemd-networkd[1608]: cilium_host: Gained IPv6LL May 17 00:52:03.060755 systemd-networkd[1608]: cilium_vxlan: Link UP May 17 00:52:03.060762 systemd-networkd[1608]: cilium_vxlan: Gained carrier May 17 00:52:03.324267 kubelet[1863]: E0517 00:52:03.324149 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:03.359120 kernel: NET: Registered PF_ALG protocol family May 17 00:52:03.541263 systemd-networkd[1608]: cilium_net: Gained IPv6LL May 17 00:52:04.066704 systemd-networkd[1608]: lxc_health: Link UP May 17 00:52:04.082127 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:52:04.082237 systemd-networkd[1608]: lxc_health: Gained carrier May 17 00:52:04.117275 systemd-networkd[1608]: cilium_vxlan: Gained IPv6LL May 17 00:52:04.309063 kubelet[1863]: E0517 00:52:04.309020 1863 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:04.324624 kubelet[1863]: E0517 00:52:04.324513 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:04.918682 systemd[1]: Created slice kubepods-besteffort-pod3dba939e_cfa0_4e06_ab46_282d147274ea.slice. 
May 17 00:52:04.976443 kubelet[1863]: I0517 00:52:04.976408 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljx6g\" (UniqueName: \"kubernetes.io/projected/3dba939e-cfa0-4e06-ab46-282d147274ea-kube-api-access-ljx6g\") pod \"nginx-deployment-7fcdb87857-xmjk7\" (UID: \"3dba939e-cfa0-4e06-ab46-282d147274ea\") " pod="default/nginx-deployment-7fcdb87857-xmjk7" May 17 00:52:05.222187 env[1446]: time="2025-05-17T00:52:05.222020762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-xmjk7,Uid:3dba939e-cfa0-4e06-ab46-282d147274ea,Namespace:default,Attempt:0,}" May 17 00:52:05.298215 systemd-networkd[1608]: lxc50ada37583b3: Link UP May 17 00:52:05.305173 kernel: eth0: renamed from tmpf738d May 17 00:52:05.316880 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:52:05.317026 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc50ada37583b3: link becomes ready May 17 00:52:05.320238 systemd-networkd[1608]: lxc50ada37583b3: Gained carrier May 17 00:52:05.325665 kubelet[1863]: E0517 00:52:05.325637 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:05.845225 systemd-networkd[1608]: lxc_health: Gained IPv6LL May 17 00:52:06.326175 kubelet[1863]: E0517 00:52:06.326133 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:06.677393 systemd-networkd[1608]: lxc50ada37583b3: Gained IPv6LL May 17 00:52:06.962786 kubelet[1863]: I0517 00:52:06.962678 1863 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:52:07.327238 kubelet[1863]: E0517 00:52:07.327141 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:08.327895 kubelet[1863]: E0517 00:52:08.327859 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:08.502177 env[1446]: time="2025-05-17T00:52:08.502065681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:08.502177 env[1446]: time="2025-05-17T00:52:08.502140519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:08.502177 env[1446]: time="2025-05-17T00:52:08.502150798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:08.502734 env[1446]: time="2025-05-17T00:52:08.502682658Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f738da040a5142929bdedf575b72a15d51a905cb4f7e1e92f6965fbffab4d705 pid=2958 runtime=io.containerd.runc.v2 May 17 00:52:08.516054 systemd[1]: Started cri-containerd-f738da040a5142929bdedf575b72a15d51a905cb4f7e1e92f6965fbffab4d705.scope. 
May 17 00:52:08.549144 env[1446]: time="2025-05-17T00:52:08.547739110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-xmjk7,Uid:3dba939e-cfa0-4e06-ab46-282d147274ea,Namespace:default,Attempt:0,} returns sandbox id \"f738da040a5142929bdedf575b72a15d51a905cb4f7e1e92f6965fbffab4d705\"" May 17 00:52:08.549282 env[1446]: time="2025-05-17T00:52:08.549211254Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:52:09.328147 kubelet[1863]: E0517 00:52:09.328108 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:10.328972 kubelet[1863]: E0517 00:52:10.328919 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:11.023577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1224987421.mount: Deactivated successfully. May 17 00:52:11.329247 kubelet[1863]: E0517 00:52:11.329142 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:12.330186 kubelet[1863]: E0517 00:52:12.330138 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:12.347118 env[1446]: time="2025-05-17T00:52:12.347064797Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:12.355839 env[1446]: time="2025-05-17T00:52:12.355809100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:12.366996 env[1446]: time="2025-05-17T00:52:12.366970162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:12.375189 env[1446]: time="2025-05-17T00:52:12.375140125Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:12.376179 env[1446]: time="2025-05-17T00:52:12.376146211Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 17 00:52:12.383752 env[1446]: time="2025-05-17T00:52:12.383715834Z" level=info msg="CreateContainer within sandbox \"f738da040a5142929bdedf575b72a15d51a905cb4f7e1e92f6965fbffab4d705\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 17 00:52:12.426520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1458406774.mount: Deactivated successfully. May 17 00:52:12.431182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000806999.mount: Deactivated successfully. 
May 17 00:52:12.446496 env[1446]: time="2025-05-17T00:52:12.446419549Z" level=info msg="CreateContainer within sandbox \"f738da040a5142929bdedf575b72a15d51a905cb4f7e1e92f6965fbffab4d705\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"cbf6a34a4d40b69941a0b2985335c5b2f42f4797538c38c8f655ba1386d15088\"" May 17 00:52:12.447262 env[1446]: time="2025-05-17T00:52:12.447228362Z" level=info msg="StartContainer for \"cbf6a34a4d40b69941a0b2985335c5b2f42f4797538c38c8f655ba1386d15088\"" May 17 00:52:12.465322 systemd[1]: Started cri-containerd-cbf6a34a4d40b69941a0b2985335c5b2f42f4797538c38c8f655ba1386d15088.scope. May 17 00:52:12.496433 env[1446]: time="2025-05-17T00:52:12.496393295Z" level=info msg="StartContainer for \"cbf6a34a4d40b69941a0b2985335c5b2f42f4797538c38c8f655ba1386d15088\" returns successfully" May 17 00:52:13.330828 kubelet[1863]: E0517 00:52:13.330789 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:13.503650 kubelet[1863]: I0517 00:52:13.503592 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-xmjk7" podStartSLOduration=5.674515254 podStartE2EDuration="9.503578057s" podCreationTimestamp="2025-05-17 00:52:04 +0000 UTC" firstStartedPulling="2025-05-17 00:52:08.548595637 +0000 UTC m=+25.705808344" lastFinishedPulling="2025-05-17 00:52:12.37765844 +0000 UTC m=+29.534871147" observedRunningTime="2025-05-17 00:52:13.49892701 +0000 UTC m=+30.656139757" watchObservedRunningTime="2025-05-17 00:52:13.503578057 +0000 UTC m=+30.660790804" May 17 00:52:14.331036 kubelet[1863]: E0517 00:52:14.330998 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:15.331686 kubelet[1863]: E0517 00:52:15.331650 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:16.332008 kubelet[1863]: E0517 00:52:16.331958 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:17.332321 kubelet[1863]: E0517 00:52:17.332282 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:18.332864 kubelet[1863]: E0517 00:52:18.332826 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:19.333696 kubelet[1863]: E0517 00:52:19.333652 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:20.334164 kubelet[1863]: E0517 00:52:20.334128 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:20.589358 systemd[1]: Created slice kubepods-besteffort-pod4a9f5e55_1c1e_4f4f_8568_52c135b98a6d.slice. 
May 17 00:52:20.653357 kubelet[1863]: I0517 00:52:20.653303 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8w26\" (UniqueName: \"kubernetes.io/projected/4a9f5e55-1c1e-4f4f-8568-52c135b98a6d-kube-api-access-l8w26\") pod \"nfs-server-provisioner-0\" (UID: \"4a9f5e55-1c1e-4f4f-8568-52c135b98a6d\") " pod="default/nfs-server-provisioner-0" May 17 00:52:20.653643 kubelet[1863]: I0517 00:52:20.653584 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4a9f5e55-1c1e-4f4f-8568-52c135b98a6d-data\") pod \"nfs-server-provisioner-0\" (UID: \"4a9f5e55-1c1e-4f4f-8568-52c135b98a6d\") " pod="default/nfs-server-provisioner-0" May 17 00:52:20.892784 env[1446]: time="2025-05-17T00:52:20.892735373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4a9f5e55-1c1e-4f4f-8568-52c135b98a6d,Namespace:default,Attempt:0,}" May 17 00:52:20.959467 systemd-networkd[1608]: lxc246751a40253: Link UP May 17 00:52:20.967337 kernel: eth0: renamed from tmp6a52b May 17 00:52:20.980297 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:52:20.980413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc246751a40253: link becomes ready May 17 00:52:20.980690 systemd-networkd[1608]: lxc246751a40253: Gained carrier May 17 00:52:21.124923 env[1446]: time="2025-05-17T00:52:21.124843740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:21.124923 env[1446]: time="2025-05-17T00:52:21.124889219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:21.124923 env[1446]: time="2025-05-17T00:52:21.124898779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:21.125407 env[1446]: time="2025-05-17T00:52:21.125350927Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a52b79664d40b8d654b039753fd83314e2ec9b50327aef7e77b40d028c5345e pid=3085 runtime=io.containerd.runc.v2 May 17 00:52:21.138106 systemd[1]: Started cri-containerd-6a52b79664d40b8d654b039753fd83314e2ec9b50327aef7e77b40d028c5345e.scope. 
May 17 00:52:21.171459 env[1446]: time="2025-05-17T00:52:21.171416905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4a9f5e55-1c1e-4f4f-8568-52c135b98a6d,Namespace:default,Attempt:0,} returns sandbox id \"6a52b79664d40b8d654b039753fd83314e2ec9b50327aef7e77b40d028c5345e\"" May 17 00:52:21.174359 env[1446]: time="2025-05-17T00:52:21.174332668Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 17 00:52:21.335185 kubelet[1863]: E0517 00:52:21.335143 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:22.336228 kubelet[1863]: E0517 00:52:22.336177 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:22.869277 systemd-networkd[1608]: lxc246751a40253: Gained IPv6LL May 17 00:52:23.337044 kubelet[1863]: E0517 00:52:23.336997 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:23.555545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount200714848.mount: Deactivated successfully. May 17 00:52:24.309058 kubelet[1863]: E0517 00:52:24.309010 1863 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:24.337795 kubelet[1863]: E0517 00:52:24.337746 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:25.338785 kubelet[1863]: E0517 00:52:25.338739 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:25.636997 env[1446]: time="2025-05-17T00:52:25.636954185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:25.646626 env[1446]: time="2025-05-17T00:52:25.646593595Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:25.652810 env[1446]: time="2025-05-17T00:52:25.652781807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:25.657015 env[1446]: time="2025-05-17T00:52:25.656986027Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:25.657744 env[1446]: time="2025-05-17T00:52:25.657712609Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 17 00:52:25.664005 env[1446]: time="2025-05-17T00:52:25.663971740Z" level=info msg="CreateContainer within sandbox \"6a52b79664d40b8d654b039753fd83314e2ec9b50327aef7e77b40d028c5345e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 17 00:52:25.694322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42537799.mount: Deactivated successfully. 
May 17 00:52:25.699648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134190123.mount: Deactivated successfully. May 17 00:52:25.715926 env[1446]: time="2025-05-17T00:52:25.715859902Z" level=info msg="CreateContainer within sandbox \"6a52b79664d40b8d654b039753fd83314e2ec9b50327aef7e77b40d028c5345e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"cc3cfdc784bdd36d6961c3775724b29c2ab9feec1466c4e1a8effa1fe8dd88be\"" May 17 00:52:25.716582 env[1446]: time="2025-05-17T00:52:25.716536126Z" level=info msg="StartContainer for \"cc3cfdc784bdd36d6961c3775724b29c2ab9feec1466c4e1a8effa1fe8dd88be\"" May 17 00:52:25.738649 systemd[1]: Started cri-containerd-cc3cfdc784bdd36d6961c3775724b29c2ab9feec1466c4e1a8effa1fe8dd88be.scope. May 17 00:52:25.769558 env[1446]: time="2025-05-17T00:52:25.769504622Z" level=info msg="StartContainer for \"cc3cfdc784bdd36d6961c3775724b29c2ab9feec1466c4e1a8effa1fe8dd88be\" returns successfully" May 17 00:52:26.339433 kubelet[1863]: E0517 00:52:26.339390 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:27.340530 kubelet[1863]: E0517 00:52:27.340491 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:28.341469 kubelet[1863]: E0517 00:52:28.341432 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:29.342666 kubelet[1863]: E0517 00:52:29.342580 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:30.342857 kubelet[1863]: E0517 00:52:30.342826 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:31.036565 kubelet[1863]: I0517 00:52:31.036500 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=6.551303927 podStartE2EDuration="11.036481703s" podCreationTimestamp="2025-05-17 00:52:20 +0000 UTC" firstStartedPulling="2025-05-17 00:52:21.173783963 +0000 UTC m=+38.330996710" lastFinishedPulling="2025-05-17 00:52:25.658961739 +0000 UTC m=+42.816174486" observedRunningTime="2025-05-17 00:52:26.521319003 +0000 UTC m=+43.678531750" watchObservedRunningTime="2025-05-17 00:52:31.036481703 +0000 UTC m=+48.193694450" May 17 00:52:31.046870 systemd[1]: Created slice kubepods-besteffort-podaecf935d_580f_4116_89e8_3aec6d999668.slice. 
May 17 00:52:31.107435 kubelet[1863]: I0517 00:52:31.107387 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8889631e-a89d-4405-a1a4-eb14df5a3258\" (UniqueName: \"kubernetes.io/nfs/aecf935d-580f-4116-89e8-3aec6d999668-pvc-8889631e-a89d-4405-a1a4-eb14df5a3258\") pod \"test-pod-1\" (UID: \"aecf935d-580f-4116-89e8-3aec6d999668\") " pod="default/test-pod-1" May 17 00:52:31.107435 kubelet[1863]: I0517 00:52:31.107436 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh2nb\" (UniqueName: \"kubernetes.io/projected/aecf935d-580f-4116-89e8-3aec6d999668-kube-api-access-kh2nb\") pod \"test-pod-1\" (UID: \"aecf935d-580f-4116-89e8-3aec6d999668\") " pod="default/test-pod-1" May 17 00:52:31.328108 kernel: FS-Cache: Loaded May 17 00:52:31.343608 kubelet[1863]: E0517 00:52:31.343568 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:31.428072 kernel: RPC: Registered named UNIX socket transport module. May 17 00:52:31.428205 kernel: RPC: Registered udp transport module. May 17 00:52:31.428233 kernel: RPC: Registered tcp transport module. May 17 00:52:31.436212 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 17 00:52:31.550111 kernel: FS-Cache: Netfs 'nfs' registered for caching May 17 00:52:31.695262 kernel: NFS: Registering the id_resolver key type May 17 00:52:31.695406 kernel: Key type id_resolver registered May 17 00:52:31.695432 kernel: Key type id_legacy registered May 17 00:52:31.987629 nfsidmap[3202]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.7-n-44db7a48ea' May 17 00:52:32.021258 nfsidmap[3203]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.7-n-44db7a48ea' May 17 00:52:32.250321 env[1446]: time="2025-05-17T00:52:32.250219409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:aecf935d-580f-4116-89e8-3aec6d999668,Namespace:default,Attempt:0,}" May 17 00:52:32.307658 systemd-networkd[1608]: lxc5e8ab021c99a: Link UP May 17 00:52:32.320199 kernel: eth0: renamed from tmpf0c3b May 17 00:52:32.336107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:52:32.336255 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5e8ab021c99a: link becomes ready May 17 00:52:32.336396 systemd-networkd[1608]: lxc5e8ab021c99a: Gained carrier May 17 00:52:32.344869 kubelet[1863]: E0517 00:52:32.344807 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:32.508756 env[1446]: time="2025-05-17T00:52:32.508607377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:32.508756 env[1446]: time="2025-05-17T00:52:32.508649136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:32.508756 env[1446]: time="2025-05-17T00:52:32.508660096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:32.509316 env[1446]: time="2025-05-17T00:52:32.509254684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0c3b0d72c1f7ce595555845fa529753b3c4089232551ad02876f9060be8cb13 pid=3229 runtime=io.containerd.runc.v2 May 17 00:52:32.528527 systemd[1]: run-containerd-runc-k8s.io-f0c3b0d72c1f7ce595555845fa529753b3c4089232551ad02876f9060be8cb13-runc.XJCF7o.mount: Deactivated successfully. May 17 00:52:32.530010 systemd[1]: Started cri-containerd-f0c3b0d72c1f7ce595555845fa529753b3c4089232551ad02876f9060be8cb13.scope. May 17 00:52:32.558734 env[1446]: time="2025-05-17T00:52:32.558692298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:aecf935d-580f-4116-89e8-3aec6d999668,Namespace:default,Attempt:0,} returns sandbox id \"f0c3b0d72c1f7ce595555845fa529753b3c4089232551ad02876f9060be8cb13\"" May 17 00:52:32.559943 env[1446]: time="2025-05-17T00:52:32.559750197Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:52:32.868867 env[1446]: time="2025-05-17T00:52:32.868759075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:32.878288 env[1446]: time="2025-05-17T00:52:32.878246286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:32.883237 env[1446]: time="2025-05-17T00:52:32.883193947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:32.890967 env[1446]: time="2025-05-17T00:52:32.890937832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:32.891592 env[1446]: time="2025-05-17T00:52:32.891561540Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 17 00:52:32.900490 env[1446]: time="2025-05-17T00:52:32.900454563Z" level=info msg="CreateContainer within sandbox \"f0c3b0d72c1f7ce595555845fa529753b3c4089232551ad02876f9060be8cb13\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 17 00:52:32.956932 env[1446]: time="2025-05-17T00:52:32.956851198Z" level=info msg="CreateContainer within sandbox \"f0c3b0d72c1f7ce595555845fa529753b3c4089232551ad02876f9060be8cb13\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"23ddb0fc739fcb9ff7c4a4e612a90f08c234bc994c09683215db30de362e830f\"" May 17 00:52:32.957639 env[1446]: time="2025-05-17T00:52:32.957617143Z" level=info msg="StartContainer for \"23ddb0fc739fcb9ff7c4a4e612a90f08c234bc994c09683215db30de362e830f\"" May 17 00:52:32.971619 systemd[1]: Started cri-containerd-23ddb0fc739fcb9ff7c4a4e612a90f08c234bc994c09683215db30de362e830f.scope. 
May 17 00:52:33.001386 env[1446]: time="2025-05-17T00:52:33.001334031Z" level=info msg="StartContainer for \"23ddb0fc739fcb9ff7c4a4e612a90f08c234bc994c09683215db30de362e830f\" returns successfully" May 17 00:52:33.344967 kubelet[1863]: E0517 00:52:33.344919 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:33.531831 kubelet[1863]: I0517 00:52:33.531779 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=12.198144166 podStartE2EDuration="12.531755714s" podCreationTimestamp="2025-05-17 00:52:21 +0000 UTC" firstStartedPulling="2025-05-17 00:52:32.559518761 +0000 UTC m=+49.716731508" lastFinishedPulling="2025-05-17 00:52:32.893130309 +0000 UTC m=+50.050343056" observedRunningTime="2025-05-17 00:52:33.531586318 +0000 UTC m=+50.688799025" watchObservedRunningTime="2025-05-17 00:52:33.531755714 +0000 UTC m=+50.688968421" May 17 00:52:33.941239 systemd-networkd[1608]: lxc5e8ab021c99a: Gained IPv6LL May 17 00:52:34.345747 kubelet[1863]: E0517 00:52:34.345630 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:35.347316 kubelet[1863]: E0517 00:52:35.347265 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:36.348267 kubelet[1863]: E0517 00:52:36.348235 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:37.349493 kubelet[1863]: E0517 00:52:37.349456 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:38.118550 systemd[1]: run-containerd-runc-k8s.io-4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7-runc.BpxhMA.mount: Deactivated successfully. May 17 00:52:38.131480 env[1446]: time="2025-05-17T00:52:38.131416510Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:52:38.138876 env[1446]: time="2025-05-17T00:52:38.138837902Z" level=info msg="StopContainer for \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\" with timeout 2 (s)" May 17 00:52:38.139143 env[1446]: time="2025-05-17T00:52:38.139119377Z" level=info msg="Stop container \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\" with signal terminated" May 17 00:52:38.145441 systemd-networkd[1608]: lxc_health: Link DOWN May 17 00:52:38.145467 systemd-networkd[1608]: lxc_health: Lost carrier May 17 00:52:38.175421 systemd[1]: cri-containerd-4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7.scope: Deactivated successfully. May 17 00:52:38.175735 systemd[1]: cri-containerd-4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7.scope: Consumed 5.896s CPU time. May 17 00:52:38.191163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7-rootfs.mount: Deactivated successfully. 
May 17 00:52:38.350182 kubelet[1863]: E0517 00:52:38.350131 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:39.099447 env[1446]: time="2025-05-17T00:52:39.099393727Z" level=info msg="shim disconnected" id=4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7 May 17 00:52:39.099447 env[1446]: time="2025-05-17T00:52:39.099443806Z" level=warning msg="cleaning up after shim disconnected" id=4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7 namespace=k8s.io May 17 00:52:39.099447 env[1446]: time="2025-05-17T00:52:39.099453686Z" level=info msg="cleaning up dead shim" May 17 00:52:39.105731 env[1446]: time="2025-05-17T00:52:39.105686461Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3363 runtime=io.containerd.runc.v2\n" May 17 00:52:39.111405 env[1446]: time="2025-05-17T00:52:39.111367726Z" level=info msg="StopContainer for \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\" returns successfully" May 17 00:52:39.112057 env[1446]: time="2025-05-17T00:52:39.112033955Z" level=info msg="StopPodSandbox for \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\"" May 17 00:52:39.112231 env[1446]: time="2025-05-17T00:52:39.112209272Z" level=info msg="Container to stop \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:39.112299 env[1446]: time="2025-05-17T00:52:39.112283710Z" level=info msg="Container to stop \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:39.112365 env[1446]: time="2025-05-17T00:52:39.112350989Z" level=info msg="Container to stop \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:39.112423 env[1446]: time="2025-05-17T00:52:39.112408268Z" level=info msg="Container to stop \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:39.112481 env[1446]: time="2025-05-17T00:52:39.112464387Z" level=info msg="Container to stop \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:39.114182 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1-shm.mount: Deactivated successfully. May 17 00:52:39.119439 systemd[1]: cri-containerd-6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1.scope: Deactivated successfully. May 17 00:52:39.137146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1-rootfs.mount: Deactivated successfully. 
May 17 00:52:39.154372 env[1446]: time="2025-05-17T00:52:39.154320524Z" level=info msg="shim disconnected" id=6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1 May 17 00:52:39.154372 env[1446]: time="2025-05-17T00:52:39.154368123Z" level=warning msg="cleaning up after shim disconnected" id=6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1 namespace=k8s.io May 17 00:52:39.154372 env[1446]: time="2025-05-17T00:52:39.154378483Z" level=info msg="cleaning up dead shim" May 17 00:52:39.161325 env[1446]: time="2025-05-17T00:52:39.161278927Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3394 runtime=io.containerd.runc.v2\n" May 17 00:52:39.161593 env[1446]: time="2025-05-17T00:52:39.161568202Z" level=info msg="TearDown network for sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" successfully" May 17 00:52:39.161632 env[1446]: time="2025-05-17T00:52:39.161593042Z" level=info msg="StopPodSandbox for \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" returns successfully" May 17 00:52:39.248494 kubelet[1863]: I0517 00:52:39.248448 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-hostproc" (OuterVolumeSpecName: "hostproc") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.248712 kubelet[1863]: I0517 00:52:39.248368 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-hostproc\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.248817 kubelet[1863]: I0517 00:52:39.248803 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-lib-modules\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.248908 kubelet[1863]: I0517 00:52:39.248882 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.248992 kubelet[1863]: I0517 00:52:39.248980 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-bpf-maps\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249100 kubelet[1863]: I0517 00:52:39.249071 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-host-proc-sys-net\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249198 kubelet[1863]: I0517 00:52:39.249187 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-run\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249299 kubelet[1863]: I0517 00:52:39.249286 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a85e6f78-a465-42ee-b60e-df931819be2c-hubble-tls\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249389 kubelet[1863]: I0517 00:52:39.249375 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9csq4\" (UniqueName: \"kubernetes.io/projected/a85e6f78-a465-42ee-b60e-df931819be2c-kube-api-access-9csq4\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249468 kubelet[1863]: I0517 00:52:39.249457 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-cgroup\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249546 kubelet[1863]: I0517 00:52:39.249535 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cni-path\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249692 kubelet[1863]: I0517 00:52:39.249678 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-config-path\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249792 kubelet[1863]: I0517 00:52:39.249775 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-xtables-lock\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249887 kubelet[1863]: I0517 00:52:39.249875 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-host-proc-sys-kernel\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: 
\"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.249975 kubelet[1863]: I0517 00:52:39.249963 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a85e6f78-a465-42ee-b60e-df931819be2c-clustermesh-secrets\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.250069 kubelet[1863]: I0517 00:52:39.250054 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-etc-cni-netd\") pod \"a85e6f78-a465-42ee-b60e-df931819be2c\" (UID: \"a85e6f78-a465-42ee-b60e-df931819be2c\") " May 17 00:52:39.250185 kubelet[1863]: I0517 00:52:39.250173 1863 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-hostproc\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.250266 kubelet[1863]: I0517 00:52:39.250257 1863 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-lib-modules\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.252555 kubelet[1863]: I0517 00:52:39.248980 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.252673 kubelet[1863]: I0517 00:52:39.249105 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.252744 kubelet[1863]: I0517 00:52:39.249235 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.252799 kubelet[1863]: I0517 00:52:39.249893 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.252871 kubelet[1863]: I0517 00:52:39.250348 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.252936 kubelet[1863]: I0517 00:52:39.250364 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cni-path" (OuterVolumeSpecName: "cni-path") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.252993 kubelet[1863]: I0517 00:52:39.252480 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:52:39.253066 kubelet[1863]: I0517 00:52:39.252510 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.253344 systemd[1]: var-lib-kubelet-pods-a85e6f78\x2da465\x2d42ee\x2db60e\x2ddf931819be2c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:52:39.253786 kubelet[1863]: I0517 00:52:39.252532 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:39.256061 kubelet[1863]: I0517 00:52:39.256033 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85e6f78-a465-42ee-b60e-df931819be2c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:52:39.257840 systemd[1]: var-lib-kubelet-pods-a85e6f78\x2da465\x2d42ee\x2db60e\x2ddf931819be2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9csq4.mount: Deactivated successfully. May 17 00:52:39.260418 systemd[1]: var-lib-kubelet-pods-a85e6f78\x2da465\x2d42ee\x2db60e\x2ddf931819be2c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:52:39.261773 kubelet[1863]: I0517 00:52:39.261739 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85e6f78-a465-42ee-b60e-df931819be2c-kube-api-access-9csq4" (OuterVolumeSpecName: "kube-api-access-9csq4") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "kube-api-access-9csq4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:52:39.261883 kubelet[1863]: I0517 00:52:39.261743 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a85e6f78-a465-42ee-b60e-df931819be2c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a85e6f78-a465-42ee-b60e-df931819be2c" (UID: "a85e6f78-a465-42ee-b60e-df931819be2c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:52:39.351214 kubelet[1863]: E0517 00:52:39.351119 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:39.351556 kubelet[1863]: I0517 00:52:39.351534 1863 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9csq4\" (UniqueName: \"kubernetes.io/projected/a85e6f78-a465-42ee-b60e-df931819be2c-kube-api-access-9csq4\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.351660 kubelet[1863]: I0517 00:52:39.351649 1863 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-cgroup\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.351733 kubelet[1863]: I0517 00:52:39.351724 1863 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cni-path\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.351800 kubelet[1863]: I0517 00:52:39.351791 1863 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-config-path\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.351863 kubelet[1863]: I0517 00:52:39.351854 1863 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-xtables-lock\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.351921 kubelet[1863]: I0517 00:52:39.351912 1863 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-host-proc-sys-kernel\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.351995 kubelet[1863]: I0517 00:52:39.351985 1863 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a85e6f78-a465-42ee-b60e-df931819be2c-clustermesh-secrets\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.352050 kubelet[1863]: I0517 00:52:39.352041 1863 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-etc-cni-netd\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.352135 kubelet[1863]: I0517 00:52:39.352125 1863 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-bpf-maps\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.352206 kubelet[1863]: I0517 00:52:39.352196 1863 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-host-proc-sys-net\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.352265 kubelet[1863]: I0517 00:52:39.352256 1863 reconciler_common.go:299] "Volume detached for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a85e6f78-a465-42ee-b60e-df931819be2c-cilium-run\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.352333 kubelet[1863]: I0517 00:52:39.352314 1863 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a85e6f78-a465-42ee-b60e-df931819be2c-hubble-tls\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:39.412777 kubelet[1863]: E0517 00:52:39.412752 1863 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:52:39.532372 kubelet[1863]: I0517 00:52:39.532338 1863 scope.go:117] "RemoveContainer" containerID="4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7" May 17 00:52:39.534389 env[1446]: time="2025-05-17T00:52:39.534103221Z" level=info msg="RemoveContainer for \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\"" May 17 00:52:39.536258 systemd[1]: Removed slice kubepods-burstable-poda85e6f78_a465_42ee_b60e_df931819be2c.slice. May 17 00:52:39.536342 systemd[1]: kubepods-burstable-poda85e6f78_a465_42ee_b60e_df931819be2c.slice: Consumed 5.987s CPU time. May 17 00:52:39.548922 env[1446]: time="2025-05-17T00:52:39.548827693Z" level=info msg="RemoveContainer for \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\" returns successfully" May 17 00:52:39.549103 kubelet[1863]: I0517 00:52:39.549062 1863 scope.go:117] "RemoveContainer" containerID="a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f" May 17 00:52:39.550030 env[1446]: time="2025-05-17T00:52:39.550003354Z" level=info msg="RemoveContainer for \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\"" May 17 00:52:39.559920 env[1446]: time="2025-05-17T00:52:39.559888827Z" level=info msg="RemoveContainer for \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\" returns successfully" May 17 00:52:39.560222 kubelet[1863]: I0517 00:52:39.560193 1863 scope.go:117] "RemoveContainer" containerID="f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0" May 17 00:52:39.561203 env[1446]: time="2025-05-17T00:52:39.561179486Z" level=info msg="RemoveContainer for \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\"" May 17 00:52:39.573970 env[1446]: time="2025-05-17T00:52:39.573939191Z" level=info msg="RemoveContainer for \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\" returns successfully" May 17 00:52:39.574298 kubelet[1863]: I0517 00:52:39.574282 1863 scope.go:117] "RemoveContainer" containerID="3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15" May 17 00:52:39.575436 env[1446]: time="2025-05-17T00:52:39.575402767Z" level=info msg="RemoveContainer for \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\"" May 17 00:52:39.585471 env[1446]: time="2025-05-17T00:52:39.585445038Z" level=info msg="RemoveContainer for \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\" returns successfully" May 17 00:52:39.585732 kubelet[1863]: I0517 00:52:39.585714 1863 scope.go:117] "RemoveContainer" containerID="292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507" May 17 00:52:39.586860 env[1446]: time="2025-05-17T00:52:39.586837094Z" level=info msg="RemoveContainer for \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\"" May 17 00:52:39.595154 env[1446]: time="2025-05-17T00:52:39.595129075Z" 
level=info msg="RemoveContainer for \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\" returns successfully" May 17 00:52:39.595489 kubelet[1863]: I0517 00:52:39.595397 1863 scope.go:117] "RemoveContainer" containerID="4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7" May 17 00:52:39.595669 env[1446]: time="2025-05-17T00:52:39.595594187Z" level=error msg="ContainerStatus for \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\": not found" May 17 00:52:39.595841 kubelet[1863]: E0517 00:52:39.595784 1863 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\": not found" containerID="4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7" May 17 00:52:39.595923 kubelet[1863]: I0517 00:52:39.595866 1863 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7"} err="failed to get container status \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d972ec9f875b60ccbb2543b716f3aabdee8ecc829a0995e57976913ceb7f9d7\": not found" May 17 00:52:39.595923 kubelet[1863]: I0517 00:52:39.595922 1863 scope.go:117] "RemoveContainer" containerID="a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f" May 17 00:52:39.596168 env[1446]: time="2025-05-17T00:52:39.596116818Z" level=error msg="ContainerStatus for \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\": not found" May 17 00:52:39.596383 kubelet[1863]: E0517 00:52:39.596273 1863 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\": not found" containerID="a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f" May 17 00:52:39.596383 kubelet[1863]: I0517 00:52:39.596300 1863 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f"} err="failed to get container status \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4c1db91708047bc600b41dafdf288e757a02f567ff675c32582c5875800a53f\": not found" May 17 00:52:39.596383 kubelet[1863]: I0517 00:52:39.596315 1863 scope.go:117] "RemoveContainer" containerID="f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0" May 17 00:52:39.596504 env[1446]: time="2025-05-17T00:52:39.596456133Z" level=error msg="ContainerStatus for \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\": not found" May 17 00:52:39.596757 kubelet[1863]: E0517 00:52:39.596733 1863 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\": not found" containerID="f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0" May 17 00:52:39.596796 kubelet[1863]: I0517 00:52:39.596762 1863 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0"} err="failed to get container status \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1c4acc4e37df6814a582d92c26cdbb6d2fb99d4521c46b88951ae4e85d641a0\": not found" May 17 00:52:39.596796 kubelet[1863]: I0517 00:52:39.596778 1863 scope.go:117] "RemoveContainer" containerID="3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15" May 17 00:52:39.597065 env[1446]: time="2025-05-17T00:52:39.597015603Z" level=error msg="ContainerStatus for \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\": not found" May 17 00:52:39.597241 kubelet[1863]: E0517 00:52:39.597207 1863 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\": not found" containerID="3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15" May 17 00:52:39.597285 kubelet[1863]: I0517 00:52:39.597247 1863 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15"} err="failed to get container status \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ed7eb75d16279d9ac1ad8d2d2e14081bf5b3480bcb8263145577190a093ce15\": not found" May 17 00:52:39.597285 kubelet[1863]: I0517 00:52:39.597262 1863 scope.go:117] "RemoveContainer" containerID="292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507" May 17 00:52:39.597476 env[1446]: time="2025-05-17T00:52:39.597424676Z" level=error msg="ContainerStatus for \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\": not found" May 17 00:52:39.597624 kubelet[1863]: E0517 00:52:39.597585 1863 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\": not found" containerID="292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507" May 17 00:52:39.597624 kubelet[1863]: I0517 00:52:39.597608 1863 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507"} err="failed to get container status \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\": rpc error: code = NotFound desc = an error occurred when try to find container \"292f5a35bdce1b9ce00c156f7c67676a170b40bc0e7e01f7b99f04765901f507\": not found" May 17 
00:52:40.351424 kubelet[1863]: E0517 00:52:40.351388 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:40.422159 kubelet[1863]: I0517 00:52:40.422120 1863 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85e6f78-a465-42ee-b60e-df931819be2c" path="/var/lib/kubelet/pods/a85e6f78-a465-42ee-b60e-df931819be2c/volumes" May 17 00:52:41.352330 kubelet[1863]: E0517 00:52:41.352292 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:41.798349 systemd[1]: Created slice kubepods-besteffort-podea43d2e4_7126_4e98_b1bc_15d71a5b0339.slice. May 17 00:52:41.826968 systemd[1]: Created slice kubepods-burstable-podb443de56_4da9_4286_ba17_1d1ade6c058b.slice. May 17 00:52:41.867968 kubelet[1863]: I0517 00:52:41.867930 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-lib-modules\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.867968 kubelet[1863]: I0517 00:52:41.867968 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b443de56-4da9-4286-ba17-1d1ade6c058b-clustermesh-secrets\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868177 kubelet[1863]: I0517 00:52:41.867993 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-cgroup\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868177 kubelet[1863]: I0517 00:52:41.868008 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cni-path\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868177 kubelet[1863]: I0517 00:52:41.868023 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-etc-cni-netd\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868177 kubelet[1863]: I0517 00:52:41.868040 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-host-proc-sys-net\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868177 kubelet[1863]: I0517 00:52:41.868054 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-host-proc-sys-kernel\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868177 kubelet[1863]: I0517 00:52:41.868073 1863 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b443de56-4da9-4286-ba17-1d1ade6c058b-hubble-tls\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868319 kubelet[1863]: I0517 00:52:41.868105 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htj9l\" (UniqueName: \"kubernetes.io/projected/b443de56-4da9-4286-ba17-1d1ade6c058b-kube-api-access-htj9l\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868319 kubelet[1863]: I0517 00:52:41.868124 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-bpf-maps\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868319 kubelet[1863]: I0517 00:52:41.868138 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-xtables-lock\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868319 kubelet[1863]: I0517 00:52:41.868153 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-config-path\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868319 kubelet[1863]: I0517 00:52:41.868170 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-ipsec-secrets\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868432 kubelet[1863]: I0517 00:52:41.868189 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea43d2e4-7126-4e98-b1bc-15d71a5b0339-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-j857f\" (UID: \"ea43d2e4-7126-4e98-b1bc-15d71a5b0339\") " pod="kube-system/cilium-operator-6c4d7847fc-j857f" May 17 00:52:41.868432 kubelet[1863]: I0517 00:52:41.868207 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-run\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:41.868432 kubelet[1863]: I0517 00:52:41.868225 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2596x\" (UniqueName: \"kubernetes.io/projected/ea43d2e4-7126-4e98-b1bc-15d71a5b0339-kube-api-access-2596x\") pod \"cilium-operator-6c4d7847fc-j857f\" (UID: \"ea43d2e4-7126-4e98-b1bc-15d71a5b0339\") " pod="kube-system/cilium-operator-6c4d7847fc-j857f" May 17 00:52:41.868432 kubelet[1863]: I0517 00:52:41.868241 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-hostproc\") pod \"cilium-kg459\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " pod="kube-system/cilium-kg459" May 17 00:52:42.103039 env[1446]: time="2025-05-17T00:52:42.101758589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j857f,Uid:ea43d2e4-7126-4e98-b1bc-15d71a5b0339,Namespace:kube-system,Attempt:0,}" May 17 00:52:42.134782 env[1446]: time="2025-05-17T00:52:42.134721193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kg459,Uid:b443de56-4da9-4286-ba17-1d1ade6c058b,Namespace:kube-system,Attempt:0,}" May 17 00:52:42.138552 env[1446]: time="2025-05-17T00:52:42.138489734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:42.138728 env[1446]: time="2025-05-17T00:52:42.138530093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:42.138728 env[1446]: time="2025-05-17T00:52:42.138550813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:42.138830 env[1446]: time="2025-05-17T00:52:42.138754129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbcdfc96e1d17deb5f1d4c1e7150d191ec34b2826c60595652010f05f9552947 pid=3423 runtime=io.containerd.runc.v2 May 17 00:52:42.150518 systemd[1]: Started cri-containerd-bbcdfc96e1d17deb5f1d4c1e7150d191ec34b2826c60595652010f05f9552947.scope. May 17 00:52:42.173820 env[1446]: time="2025-05-17T00:52:42.173752261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:42.174133 env[1446]: time="2025-05-17T00:52:42.174018177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:42.174252 env[1446]: time="2025-05-17T00:52:42.174229814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:42.174601 env[1446]: time="2025-05-17T00:52:42.174555529Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32 pid=3456 runtime=io.containerd.runc.v2 May 17 00:52:42.190352 systemd[1]: Started cri-containerd-12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32.scope. 
May 17 00:52:42.198323 env[1446]: time="2025-05-17T00:52:42.198237838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j857f,Uid:ea43d2e4-7126-4e98-b1bc-15d71a5b0339,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbcdfc96e1d17deb5f1d4c1e7150d191ec34b2826c60595652010f05f9552947\"" May 17 00:52:42.200551 env[1446]: time="2025-05-17T00:52:42.200521642Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:52:42.219292 env[1446]: time="2025-05-17T00:52:42.219255909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kg459,Uid:b443de56-4da9-4286-ba17-1d1ade6c058b,Namespace:kube-system,Attempt:0,} returns sandbox id \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\"" May 17 00:52:42.227473 env[1446]: time="2025-05-17T00:52:42.227432660Z" level=info msg="CreateContainer within sandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:52:42.294100 env[1446]: time="2025-05-17T00:52:42.294037297Z" level=info msg="CreateContainer within sandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654\"" May 17 00:52:42.294645 env[1446]: time="2025-05-17T00:52:42.294618928Z" level=info msg="StartContainer for \"acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654\"" May 17 00:52:42.308559 systemd[1]: Started cri-containerd-acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654.scope. May 17 00:52:42.318967 systemd[1]: cri-containerd-acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654.scope: Deactivated successfully. May 17 00:52:42.319255 systemd[1]: Stopped cri-containerd-acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654.scope. 
May 17 00:52:42.353513 kubelet[1863]: E0517 00:52:42.353406 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:42.358101 env[1446]: time="2025-05-17T00:52:42.358041015Z" level=info msg="shim disconnected" id=acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654 May 17 00:52:42.358263 env[1446]: time="2025-05-17T00:52:42.358241812Z" level=warning msg="cleaning up after shim disconnected" id=acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654 namespace=k8s.io May 17 00:52:42.358327 env[1446]: time="2025-05-17T00:52:42.358314250Z" level=info msg="cleaning up dead shim" May 17 00:52:42.365487 env[1446]: time="2025-05-17T00:52:42.365448299Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3519 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:52:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:52:42.365932 env[1446]: time="2025-05-17T00:52:42.365821933Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" May 17 00:52:42.368490 env[1446]: time="2025-05-17T00:52:42.368446132Z" level=error msg="Failed to pipe stdout of container \"acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654\"" error="reading from a closed fifo" May 17 00:52:42.368553 env[1446]: time="2025-05-17T00:52:42.368516531Z" level=error msg="Failed to pipe stderr of container \"acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654\"" error="reading from a closed fifo" May 17 00:52:42.372756 env[1446]: time="2025-05-17T00:52:42.372705425Z" level=error msg="StartContainer for \"acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:52:42.372973 kubelet[1863]: E0517 00:52:42.372936 1863 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654" May 17 00:52:42.373167 kubelet[1863]: E0517 00:52:42.373140 1863 kuberuntime_manager.go:1358] "Unhandled Error" err=< May 17 00:52:42.373167 kubelet[1863]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:52:42.373167 kubelet[1863]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:52:42.373167 kubelet[1863]: rm /hostbin/cilium-mount May 17 00:52:42.373286 kubelet[1863]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htj9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kg459_kube-system(b443de56-4da9-4286-ba17-1d1ade6c058b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:52:42.373286 kubelet[1863]: > logger="UnhandledError" May 17 00:52:42.374519 kubelet[1863]: E0517 00:52:42.374486 1863 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kg459" podUID="b443de56-4da9-4286-ba17-1d1ade6c058b" May 17 00:52:42.542696 env[1446]: time="2025-05-17T00:52:42.542646483Z" level=info msg="CreateContainer within sandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" May 17 00:52:42.583155 env[1446]: time="2025-05-17T00:52:42.583002531Z" level=info msg="CreateContainer within sandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d\"" May 17 00:52:42.584075 env[1446]: time="2025-05-17T00:52:42.583999636Z" level=info msg="StartContainer for \"92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d\"" May 17 00:52:42.599407 systemd[1]: Started cri-containerd-92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d.scope. May 17 00:52:42.611663 systemd[1]: cri-containerd-92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d.scope: Deactivated successfully. 
May 17 00:52:42.637885 env[1446]: time="2025-05-17T00:52:42.637829192Z" level=info msg="shim disconnected" id=92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d May 17 00:52:42.638185 env[1446]: time="2025-05-17T00:52:42.638163947Z" level=warning msg="cleaning up after shim disconnected" id=92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d namespace=k8s.io May 17 00:52:42.638262 env[1446]: time="2025-05-17T00:52:42.638248146Z" level=info msg="cleaning up dead shim" May 17 00:52:42.645349 env[1446]: time="2025-05-17T00:52:42.645311755Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3555 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:52:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:52:42.645718 env[1446]: time="2025-05-17T00:52:42.645669550Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" May 17 00:52:42.646176 env[1446]: time="2025-05-17T00:52:42.645946705Z" level=error msg="Failed to pipe stderr of container \"92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d\"" error="reading from a closed fifo" May 17 00:52:42.646276 env[1446]: time="2025-05-17T00:52:42.646139622Z" level=error msg="Failed to pipe stdout of container \"92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d\"" error="reading from a closed fifo" May 17 00:52:42.649819 env[1446]: time="2025-05-17T00:52:42.649773005Z" level=error msg="StartContainer for \"92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:52:42.650103 kubelet[1863]: E0517 00:52:42.650000 1863 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d" May 17 00:52:42.650310 kubelet[1863]: E0517 00:52:42.650279 1863 kuberuntime_manager.go:1358] "Unhandled Error" err=< May 17 00:52:42.650310 kubelet[1863]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:52:42.650310 kubelet[1863]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:52:42.650310 kubelet[1863]: rm /hostbin/cilium-mount May 17 00:52:42.650310 kubelet[1863]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htj9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kg459_kube-system(b443de56-4da9-4286-ba17-1d1ade6c058b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:52:42.650310 kubelet[1863]: > logger="UnhandledError" May 17 00:52:42.651442 kubelet[1863]: E0517 00:52:42.651406 1863 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kg459" podUID="b443de56-4da9-4286-ba17-1d1ade6c058b" May 17 00:52:43.354295 kubelet[1863]: E0517 00:52:43.354250 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:43.546500 kubelet[1863]: I0517 00:52:43.546072 1863 scope.go:117] "RemoveContainer" containerID="acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654" May 17 00:52:43.546974 env[1446]: time="2025-05-17T00:52:43.546747110Z" level=info msg="StopPodSandbox for \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\"" May 17 00:52:43.546974 env[1446]: time="2025-05-17T00:52:43.546817869Z" level=info msg="Container to stop \"acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:43.546974 env[1446]: time="2025-05-17T00:52:43.546833029Z" level=info msg="Container to stop \"92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:43.548593 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32-shm.mount: Deactivated successfully. May 17 00:52:43.551697 env[1446]: time="2025-05-17T00:52:43.551656835Z" level=info msg="RemoveContainer for \"acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654\"" May 17 00:52:43.556051 systemd[1]: cri-containerd-12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32.scope: Deactivated successfully. May 17 00:52:43.570259 env[1446]: time="2025-05-17T00:52:43.570220071Z" level=info msg="RemoveContainer for \"acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654\" returns successfully" May 17 00:52:43.575186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32-rootfs.mount: Deactivated successfully. May 17 00:52:43.595316 env[1446]: time="2025-05-17T00:52:43.595271127Z" level=info msg="shim disconnected" id=12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32 May 17 00:52:43.595546 env[1446]: time="2025-05-17T00:52:43.595527843Z" level=warning msg="cleaning up after shim disconnected" id=12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32 namespace=k8s.io May 17 00:52:43.595609 env[1446]: time="2025-05-17T00:52:43.595597522Z" level=info msg="cleaning up dead shim" May 17 00:52:43.602145 env[1446]: time="2025-05-17T00:52:43.602108223Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3587 runtime=io.containerd.runc.v2\n" May 17 00:52:43.602549 env[1446]: time="2025-05-17T00:52:43.602520576Z" level=info msg="TearDown network for sandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" successfully" May 17 00:52:43.602635 env[1446]: time="2025-05-17T00:52:43.602619695Z" level=info msg="StopPodSandbox for \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" returns successfully" May 17 00:52:43.682014 kubelet[1863]: I0517 00:52:43.681976 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b443de56-4da9-4286-ba17-1d1ade6c058b-hubble-tls\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682014 kubelet[1863]: I0517 00:52:43.682011 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cni-path\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682014 kubelet[1863]: I0517 00:52:43.682027 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-host-proc-sys-net\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682045 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-ipsec-secrets\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682067 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-cgroup\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682091 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-host-proc-sys-kernel\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682107 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htj9l\" (UniqueName: \"kubernetes.io/projected/b443de56-4da9-4286-ba17-1d1ade6c058b-kube-api-access-htj9l\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682124 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-xtables-lock\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682141 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-config-path\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682166 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-run\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682181 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-hostproc\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682194 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-lib-modules\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682208 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-bpf-maps\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682225 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b443de56-4da9-4286-ba17-1d1ade6c058b-clustermesh-secrets\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682242 1863 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-etc-cni-netd\") pod \"b443de56-4da9-4286-ba17-1d1ade6c058b\" (UID: \"b443de56-4da9-4286-ba17-1d1ade6c058b\") " May 17 00:52:43.682312 kubelet[1863]: I0517 00:52:43.682299 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.682737 kubelet[1863]: I0517 00:52:43.682581 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.682737 kubelet[1863]: I0517 00:52:43.682602 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cni-path" (OuterVolumeSpecName: "cni-path") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.682737 kubelet[1863]: I0517 00:52:43.682615 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.685778 kubelet[1863]: I0517 00:52:43.685645 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.685778 kubelet[1863]: I0517 00:52:43.685689 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.691242 systemd[1]: var-lib-kubelet-pods-b443de56\x2d4da9\x2d4286\x2dba17\x2d1d1ade6c058b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:52:43.694677 kubelet[1863]: I0517 00:52:43.693211 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:52:43.691340 systemd[1]: var-lib-kubelet-pods-b443de56\x2d4da9\x2d4286\x2dba17\x2d1d1ade6c058b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:52:43.697648 kubelet[1863]: I0517 00:52:43.697600 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.697736 kubelet[1863]: I0517 00:52:43.697653 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-hostproc" (OuterVolumeSpecName: "hostproc") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.697736 kubelet[1863]: I0517 00:52:43.697679 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.697736 kubelet[1863]: I0517 00:52:43.697696 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.697809 kubelet[1863]: I0517 00:52:43.697797 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b443de56-4da9-4286-ba17-1d1ade6c058b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:52:43.700740 kubelet[1863]: I0517 00:52:43.700700 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:52:43.701272 kubelet[1863]: I0517 00:52:43.701236 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b443de56-4da9-4286-ba17-1d1ade6c058b-kube-api-access-htj9l" (OuterVolumeSpecName: "kube-api-access-htj9l") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "kube-api-access-htj9l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:52:43.703895 kubelet[1863]: I0517 00:52:43.703862 1863 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b443de56-4da9-4286-ba17-1d1ade6c058b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b443de56-4da9-4286-ba17-1d1ade6c058b" (UID: "b443de56-4da9-4286-ba17-1d1ade6c058b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:52:43.783204 kubelet[1863]: I0517 00:52:43.783167 1863 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-host-proc-sys-net\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783204 kubelet[1863]: I0517 00:52:43.783194 1863 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-ipsec-secrets\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783204 kubelet[1863]: I0517 00:52:43.783205 1863 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-cgroup\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783204 kubelet[1863]: I0517 00:52:43.783214 1863 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-host-proc-sys-kernel\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783223 1863 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-htj9l\" (UniqueName: \"kubernetes.io/projected/b443de56-4da9-4286-ba17-1d1ade6c058b-kube-api-access-htj9l\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783231 1863 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-xtables-lock\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783238 1863 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-config-path\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783246 1863 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cilium-run\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783254 1863 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-hostproc\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783262 1863 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-lib-modules\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783269 1863 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-bpf-maps\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 
kubelet[1863]: I0517 00:52:43.783276 1863 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b443de56-4da9-4286-ba17-1d1ade6c058b-clustermesh-secrets\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783283 1863 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-etc-cni-netd\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783290 1863 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b443de56-4da9-4286-ba17-1d1ade6c058b-hubble-tls\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.783416 kubelet[1863]: I0517 00:52:43.783296 1863 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b443de56-4da9-4286-ba17-1d1ade6c058b-cni-path\") on node \"10.200.20.35\" DevicePath \"\"" May 17 00:52:43.974174 systemd[1]: var-lib-kubelet-pods-b443de56\x2d4da9\x2d4286\x2dba17\x2d1d1ade6c058b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhtj9l.mount: Deactivated successfully. May 17 00:52:43.974262 systemd[1]: var-lib-kubelet-pods-b443de56\x2d4da9\x2d4286\x2dba17\x2d1d1ade6c058b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:52:44.309265 kubelet[1863]: E0517 00:52:44.309153 1863 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:44.335775 kubelet[1863]: I0517 00:52:44.335516 1863 scope.go:117] "RemoveContainer" containerID="92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d" May 17 00:52:44.337030 env[1446]: time="2025-05-17T00:52:44.336997452Z" level=info msg="RemoveContainer for \"92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d\"" May 17 00:52:44.353492 env[1446]: time="2025-05-17T00:52:44.353457685Z" level=info msg="RemoveContainer for \"92409f883335783526d5ac02a5d04112539d8c6207ef347a71948f48018f6a6d\" returns successfully" May 17 00:52:44.354460 kubelet[1863]: E0517 00:52:44.354417 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:44.355149 env[1446]: time="2025-05-17T00:52:44.355118661Z" level=info msg="StopPodSandbox for \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\"" May 17 00:52:44.355230 env[1446]: time="2025-05-17T00:52:44.355188780Z" level=info msg="TearDown network for sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" successfully" May 17 00:52:44.355230 env[1446]: time="2025-05-17T00:52:44.355218259Z" level=info msg="StopPodSandbox for \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" returns successfully" May 17 00:52:44.355606 env[1446]: time="2025-05-17T00:52:44.355583094Z" level=info msg="RemovePodSandbox for \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\"" May 17 00:52:44.355669 env[1446]: time="2025-05-17T00:52:44.355606733Z" level=info msg="Forcibly stopping sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\"" May 17 00:52:44.355669 env[1446]: time="2025-05-17T00:52:44.355656453Z" level=info msg="TearDown network for sandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" successfully" May 17 00:52:44.365039 env[1446]: 
time="2025-05-17T00:52:44.364998993Z" level=info msg="RemovePodSandbox \"6e286fedfbeab75e4da32f09e375667cafc10bb0dd67e03c70ddfc0c2e7440f1\" returns successfully" May 17 00:52:44.365450 env[1446]: time="2025-05-17T00:52:44.365383107Z" level=info msg="StopPodSandbox for \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\"" May 17 00:52:44.365533 env[1446]: time="2025-05-17T00:52:44.365493785Z" level=info msg="TearDown network for sandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" successfully" May 17 00:52:44.365574 env[1446]: time="2025-05-17T00:52:44.365528345Z" level=info msg="StopPodSandbox for \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" returns successfully" May 17 00:52:44.365825 env[1446]: time="2025-05-17T00:52:44.365793941Z" level=info msg="RemovePodSandbox for \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\"" May 17 00:52:44.365884 env[1446]: time="2025-05-17T00:52:44.365819501Z" level=info msg="Forcibly stopping sandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\"" May 17 00:52:44.365884 env[1446]: time="2025-05-17T00:52:44.365871140Z" level=info msg="TearDown network for sandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" successfully" May 17 00:52:44.373098 env[1446]: time="2025-05-17T00:52:44.373057712Z" level=info msg="RemovePodSandbox \"12ff032fcbcc4b09dc30ac8c4c8e9fe7cee3d8cf68ae09d8378f2dcf20f22c32\" returns successfully" May 17 00:52:44.413464 kubelet[1863]: E0517 00:52:44.413398 1863 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:52:44.424932 systemd[1]: Removed slice kubepods-burstable-podb443de56_4da9_4286_ba17_1d1ade6c058b.slice. May 17 00:52:44.507111 env[1446]: time="2025-05-17T00:52:44.507051028Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:44.528249 env[1446]: time="2025-05-17T00:52:44.528213831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:44.533466 env[1446]: time="2025-05-17T00:52:44.533439873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:52:44.534071 env[1446]: time="2025-05-17T00:52:44.534047224Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 17 00:52:44.541506 env[1446]: time="2025-05-17T00:52:44.541476593Z" level=info msg="CreateContainer within sandbox \"bbcdfc96e1d17deb5f1d4c1e7150d191ec34b2826c60595652010f05f9552947\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:52:44.574488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2053095691.mount: Deactivated successfully. 
May 17 00:52:44.579918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678080839.mount: Deactivated successfully. May 17 00:52:44.610444 systemd[1]: Created slice kubepods-burstable-pod22edbf2f_f9c8_4606_b71f_ce3f9b9729e8.slice. May 17 00:52:44.614252 env[1446]: time="2025-05-17T00:52:44.614205865Z" level=info msg="CreateContainer within sandbox \"bbcdfc96e1d17deb5f1d4c1e7150d191ec34b2826c60595652010f05f9552947\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"906858532afc35ce5beae34cdabb0732c8804fe2445e249549b6df08ae2c8566\"" May 17 00:52:44.615049 env[1446]: time="2025-05-17T00:52:44.615022613Z" level=info msg="StartContainer for \"906858532afc35ce5beae34cdabb0732c8804fe2445e249549b6df08ae2c8566\"" May 17 00:52:44.628469 systemd[1]: Started cri-containerd-906858532afc35ce5beae34cdabb0732c8804fe2445e249549b6df08ae2c8566.scope. May 17 00:52:44.660610 env[1446]: time="2025-05-17T00:52:44.660569772Z" level=info msg="StartContainer for \"906858532afc35ce5beae34cdabb0732c8804fe2445e249549b6df08ae2c8566\" returns successfully" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687203 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-clustermesh-secrets\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687238 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-host-proc-sys-net\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687268 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-cilium-cgroup\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687282 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-cni-path\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687296 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-xtables-lock\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687312 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-cilium-ipsec-secrets\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687338 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-hubble-tls\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687353 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-etc-cni-netd\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687368 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-lib-modules\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687381 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-bpf-maps\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687405 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-cilium-config-path\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687425 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-hostproc\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687441 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-cilium-run\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687462 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-host-proc-sys-kernel\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.687645 kubelet[1863]: I0517 00:52:44.687488 1863 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpfqq\" (UniqueName: \"kubernetes.io/projected/22edbf2f-f9c8-4606-b71f-ce3f9b9729e8-kube-api-access-wpfqq\") pod \"cilium-2d7bh\" (UID: \"22edbf2f-f9c8-4606-b71f-ce3f9b9729e8\") " pod="kube-system/cilium-2d7bh" May 17 00:52:44.917975 env[1446]: time="2025-05-17T00:52:44.917919522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2d7bh,Uid:22edbf2f-f9c8-4606-b71f-ce3f9b9729e8,Namespace:kube-system,Attempt:0,}" May 17 00:52:44.967911 env[1446]: time="2025-05-17T00:52:44.967836975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:44.968043 env[1446]: time="2025-05-17T00:52:44.967922254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:44.968043 env[1446]: time="2025-05-17T00:52:44.967947934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:44.968276 env[1446]: time="2025-05-17T00:52:44.968223410Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a pid=3658 runtime=io.containerd.runc.v2 May 17 00:52:44.991227 systemd[1]: run-containerd-runc-k8s.io-5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a-runc.Vk9pnG.mount: Deactivated successfully. May 17 00:52:44.993702 systemd[1]: Started cri-containerd-5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a.scope. May 17 00:52:45.015770 env[1446]: time="2025-05-17T00:52:45.015660145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2d7bh,Uid:22edbf2f-f9c8-4606-b71f-ce3f9b9729e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\"" May 17 00:52:45.024443 env[1446]: time="2025-05-17T00:52:45.024399457Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:52:45.070247 env[1446]: time="2025-05-17T00:52:45.070199507Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4\"" May 17 00:52:45.071149 env[1446]: time="2025-05-17T00:52:45.071106814Z" level=info msg="StartContainer for \"b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4\"" May 17 00:52:45.084114 systemd[1]: Started cri-containerd-b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4.scope. May 17 00:52:45.111788 env[1446]: time="2025-05-17T00:52:45.111741540Z" level=info msg="StartContainer for \"b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4\" returns successfully" May 17 00:52:45.116036 systemd[1]: cri-containerd-b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4.scope: Deactivated successfully. 
May 17 00:52:45.384091 kubelet[1863]: E0517 00:52:45.355306 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:45.447024 env[1446]: time="2025-05-17T00:52:45.446972958Z" level=info msg="shim disconnected" id=b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4 May 17 00:52:45.447024 env[1446]: time="2025-05-17T00:52:45.447019397Z" level=warning msg="cleaning up after shim disconnected" id=b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4 namespace=k8s.io May 17 00:52:45.447024 env[1446]: time="2025-05-17T00:52:45.447028637Z" level=info msg="cleaning up dead shim" May 17 00:52:45.453685 env[1446]: time="2025-05-17T00:52:45.453643420Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3740 runtime=io.containerd.runc.v2\n" May 17 00:52:45.463122 kubelet[1863]: W0517 00:52:45.462355 1863 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb443de56_4da9_4286_ba17_1d1ade6c058b.slice/cri-containerd-acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654.scope WatchSource:0}: container "acc7476ae6eb2aa2eaf73b8225c3e3fc5110cf735a1c566b4116e957e3f21654" in namespace "k8s.io": not found May 17 00:52:45.557719 env[1446]: time="2025-05-17T00:52:45.557674259Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:52:45.579746 kubelet[1863]: I0517 00:52:45.579700 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-j857f" podStartSLOduration=2.244272464 podStartE2EDuration="4.579649818s" podCreationTimestamp="2025-05-17 00:52:41 +0000 UTC" firstStartedPulling="2025-05-17 00:52:42.199964211 +0000 UTC m=+59.357176958" lastFinishedPulling="2025-05-17 00:52:44.535341565 +0000 UTC m=+61.692554312" observedRunningTime="2025-05-17 00:52:45.56311066 +0000 UTC m=+62.720323407" watchObservedRunningTime="2025-05-17 00:52:45.579649818 +0000 UTC m=+62.736862565" May 17 00:52:45.600193 env[1446]: time="2025-05-17T00:52:45.600143958Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4\"" May 17 00:52:45.600826 env[1446]: time="2025-05-17T00:52:45.600802629Z" level=info msg="StartContainer for \"232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4\"" May 17 00:52:45.613646 systemd[1]: Started cri-containerd-232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4.scope. May 17 00:52:45.641483 env[1446]: time="2025-05-17T00:52:45.641377195Z" level=info msg="StartContainer for \"232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4\" returns successfully" May 17 00:52:45.644776 systemd[1]: cri-containerd-232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4.scope: Deactivated successfully. 
May 17 00:52:45.679879 env[1446]: time="2025-05-17T00:52:45.679828553Z" level=info msg="shim disconnected" id=232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4 May 17 00:52:45.679879 env[1446]: time="2025-05-17T00:52:45.679881712Z" level=warning msg="cleaning up after shim disconnected" id=232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4 namespace=k8s.io May 17 00:52:45.680113 env[1446]: time="2025-05-17T00:52:45.679891112Z" level=info msg="cleaning up dead shim" May 17 00:52:45.686562 env[1446]: time="2025-05-17T00:52:45.686519855Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3802 runtime=io.containerd.runc.v2\n" May 17 00:52:45.892507 kubelet[1863]: I0517 00:52:45.891988 1863 setters.go:618] "Node became not ready" node="10.200.20.35" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:52:45Z","lastTransitionTime":"2025-05-17T00:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:52:45.974378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3529131236.mount: Deactivated successfully. May 17 00:52:46.356304 kubelet[1863]: E0517 00:52:46.356200 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:46.422839 kubelet[1863]: I0517 00:52:46.422801 1863 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b443de56-4da9-4286-ba17-1d1ade6c058b" path="/var/lib/kubelet/pods/b443de56-4da9-4286-ba17-1d1ade6c058b/volumes" May 17 00:52:46.565558 env[1446]: time="2025-05-17T00:52:46.565513983Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:52:46.600219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677726547.mount: Deactivated successfully. May 17 00:52:46.608647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502272876.mount: Deactivated successfully. May 17 00:52:46.627858 env[1446]: time="2025-05-17T00:52:46.627810132Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a\"" May 17 00:52:46.629078 env[1446]: time="2025-05-17T00:52:46.628314445Z" level=info msg="StartContainer for \"47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a\"" May 17 00:52:46.643112 systemd[1]: Started cri-containerd-47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a.scope. May 17 00:52:46.668613 systemd[1]: cri-containerd-47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a.scope: Deactivated successfully. 
May 17 00:52:46.670953 env[1446]: time="2025-05-17T00:52:46.670906876Z" level=info msg="StartContainer for \"47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a\" returns successfully" May 17 00:52:46.702348 env[1446]: time="2025-05-17T00:52:46.702284627Z" level=info msg="shim disconnected" id=47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a May 17 00:52:46.702348 env[1446]: time="2025-05-17T00:52:46.702332147Z" level=warning msg="cleaning up after shim disconnected" id=47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a namespace=k8s.io May 17 00:52:46.702348 env[1446]: time="2025-05-17T00:52:46.702340827Z" level=info msg="cleaning up dead shim" May 17 00:52:46.708670 env[1446]: time="2025-05-17T00:52:46.708623617Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3861 runtime=io.containerd.runc.v2\n" May 17 00:52:47.356718 kubelet[1863]: E0517 00:52:47.356682 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:47.571115 env[1446]: time="2025-05-17T00:52:47.571058385Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:52:47.599766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2356625746.mount: Deactivated successfully. May 17 00:52:47.615944 env[1446]: time="2025-05-17T00:52:47.615602842Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b\"" May 17 00:52:47.616556 env[1446]: time="2025-05-17T00:52:47.616493790Z" level=info msg="StartContainer for \"f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b\"" May 17 00:52:47.633325 systemd[1]: Started cri-containerd-f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b.scope. May 17 00:52:47.658022 systemd[1]: cri-containerd-f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b.scope: Deactivated successfully. May 17 00:52:47.660713 env[1446]: time="2025-05-17T00:52:47.660679452Z" level=info msg="StartContainer for \"f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b\" returns successfully" May 17 00:52:47.688840 env[1446]: time="2025-05-17T00:52:47.688799459Z" level=info msg="shim disconnected" id=f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b May 17 00:52:47.689245 env[1446]: time="2025-05-17T00:52:47.689225173Z" level=warning msg="cleaning up after shim disconnected" id=f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b namespace=k8s.io May 17 00:52:47.689316 env[1446]: time="2025-05-17T00:52:47.689303492Z" level=info msg="cleaning up dead shim" May 17 00:52:47.696003 env[1446]: time="2025-05-17T00:52:47.695973479Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3916 runtime=io.containerd.runc.v2\n" May 17 00:52:47.975858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b-rootfs.mount: Deactivated successfully. 
May 17 00:52:48.358102 kubelet[1863]: E0517 00:52:48.357718 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:48.570657 env[1446]: time="2025-05-17T00:52:48.570610863Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:52:48.580072 kubelet[1863]: W0517 00:52:48.580038 1863 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22edbf2f_f9c8_4606_b71f_ce3f9b9729e8.slice/cri-containerd-b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4.scope WatchSource:0}: task b979b12c28661def08d4392543fb9292a5302c211c93b5ef917ede4a091356e4 not found May 17 00:52:48.600640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151067199.mount: Deactivated successfully. May 17 00:52:48.605759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338390643.mount: Deactivated successfully. May 17 00:52:48.619531 env[1446]: time="2025-05-17T00:52:48.619443235Z" level=info msg="CreateContainer within sandbox \"5295be22b20b0f34c09951ed6a64a05621bb02312c51b0cb8ab43d0bebe0900a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda\"" May 17 00:52:48.620247 env[1446]: time="2025-05-17T00:52:48.620218825Z" level=info msg="StartContainer for \"3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda\"" May 17 00:52:48.634065 systemd[1]: Started cri-containerd-3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda.scope. May 17 00:52:48.663935 env[1446]: time="2025-05-17T00:52:48.663885988Z" level=info msg="StartContainer for \"3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda\" returns successfully" May 17 00:52:48.958182 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 17 00:52:49.358522 kubelet[1863]: E0517 00:52:49.358386 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:49.586980 kubelet[1863]: I0517 00:52:49.586924 1863 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2d7bh" podStartSLOduration=5.586908537 podStartE2EDuration="5.586908537s" podCreationTimestamp="2025-05-17 00:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:52:49.585810111 +0000 UTC m=+66.743022858" watchObservedRunningTime="2025-05-17 00:52:49.586908537 +0000 UTC m=+66.744121284" May 17 00:52:49.893431 systemd[1]: run-containerd-runc-k8s.io-3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda-runc.j0qNrr.mount: Deactivated successfully. 
May 17 00:52:50.359000 kubelet[1863]: E0517 00:52:50.358879 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:51.359429 kubelet[1863]: E0517 00:52:51.359395 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:51.582571 systemd-networkd[1608]: lxc_health: Link UP May 17 00:52:51.603160 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:52:51.604252 systemd-networkd[1608]: lxc_health: Gained carrier May 17 00:52:51.688108 kubelet[1863]: W0517 00:52:51.687590 1863 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22edbf2f_f9c8_4606_b71f_ce3f9b9729e8.slice/cri-containerd-232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4.scope WatchSource:0}: task 232d0fb752a25b99a26c71b91fd70ac0c8b7112eac6bfbd419ea22ba3703c8f4 not found May 17 00:52:52.044776 systemd[1]: run-containerd-runc-k8s.io-3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda-runc.lITctZ.mount: Deactivated successfully. May 17 00:52:52.360470 kubelet[1863]: E0517 00:52:52.360345 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:52.821311 systemd-networkd[1608]: lxc_health: Gained IPv6LL May 17 00:52:53.361554 kubelet[1863]: E0517 00:52:53.361500 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:54.201796 systemd[1]: run-containerd-runc-k8s.io-3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda-runc.nlcefz.mount: Deactivated successfully. May 17 00:52:54.362368 kubelet[1863]: E0517 00:52:54.362318 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:54.795527 kubelet[1863]: W0517 00:52:54.795489 1863 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22edbf2f_f9c8_4606_b71f_ce3f9b9729e8.slice/cri-containerd-47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a.scope WatchSource:0}: task 47338096c06d9664596c5dfd8e531cc20d0d4abe4a8f59e83a578b1445ac241a not found May 17 00:52:55.362739 kubelet[1863]: E0517 00:52:55.362709 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:56.326499 systemd[1]: run-containerd-runc-k8s.io-3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda-runc.xRZhXR.mount: Deactivated successfully. 
May 17 00:52:56.363566 kubelet[1863]: E0517 00:52:56.363525 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:57.364687 kubelet[1863]: E0517 00:52:57.364653 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:57.901554 kubelet[1863]: W0517 00:52:57.901520 1863 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22edbf2f_f9c8_4606_b71f_ce3f9b9729e8.slice/cri-containerd-f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b.scope WatchSource:0}: task f76ad03f036b96ae5b3f9539753ddf7ae9d225de9b45ac2cd8f095e8da69463b not found May 17 00:52:58.366456 kubelet[1863]: E0517 00:52:58.366123 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:52:58.435602 systemd[1]: run-containerd-runc-k8s.io-3c09c187ea518f22d516b3f25eabe5e33258ff6a5bc7562a04751cab02583eda-runc.E2St30.mount: Deactivated successfully. May 17 00:52:59.367624 kubelet[1863]: E0517 00:52:59.367562 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:00.367893 kubelet[1863]: E0517 00:53:00.367854 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:01.368570 kubelet[1863]: E0517 00:53:01.368511 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:53:02.369078 kubelet[1863]: E0517 00:53:02.369039 1863 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"