May 17 00:48:24.011376 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 17 00:48:24.011393 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri May 16 23:24:21 -00 2025 May 17 00:48:24.011401 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') May 17 00:48:24.011409 kernel: printk: bootconsole [pl11] enabled May 17 00:48:24.011414 kernel: efi: EFI v2.70 by EDK II May 17 00:48:24.011419 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98 May 17 00:48:24.011426 kernel: random: crng init done May 17 00:48:24.011432 kernel: ACPI: Early table checksum verification disabled May 17 00:48:24.011437 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) May 17 00:48:24.011442 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:24.011448 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:24.011453 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) May 17 00:48:24.011460 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:24.011465 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:24.011472 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:24.011478 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:24.011484 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:24.011491 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:24.011498 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) May 17 00:48:24.011503 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 17 00:48:24.011509 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 May 17 00:48:24.011515 kernel: NUMA: Failed to initialise from firmware May 17 00:48:24.011520 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] May 17 00:48:24.011526 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff] May 17 00:48:24.011532 kernel: Zone ranges: May 17 00:48:24.011537 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] May 17 00:48:24.011543 kernel: DMA32 empty May 17 00:48:24.011548 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] May 17 00:48:24.011555 kernel: Movable zone start for each node May 17 00:48:24.011561 kernel: Early memory node ranges May 17 00:48:24.011566 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] May 17 00:48:24.011572 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] May 17 00:48:24.011577 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] May 17 00:48:24.011583 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] May 17 00:48:24.011589 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] May 17 00:48:24.011594 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] May 17 00:48:24.011600 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] May 17 00:48:24.011605 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] May 17 00:48:24.011611 kernel: On node 0, zone DMA: 36 pages in unavailable ranges May 17 00:48:24.011617 kernel: psci: probing for conduit method from ACPI. May 17 00:48:24.011625 kernel: psci: PSCIv1.1 detected in firmware. May 17 00:48:24.011631 kernel: psci: Using standard PSCI v0.2 function IDs May 17 00:48:24.011638 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 17 00:48:24.011643 kernel: psci: SMC Calling Convention v1.4 May 17 00:48:24.011649 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 May 17 00:48:24.011657 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 May 17 00:48:24.011663 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 17 00:48:24.011669 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 17 00:48:24.011675 kernel: pcpu-alloc: [0] 0 [0] 1 May 17 00:48:24.011681 kernel: Detected PIPT I-cache on CPU0 May 17 00:48:24.011687 kernel: CPU features: detected: GIC system register CPU interface May 17 00:48:24.011693 kernel: CPU features: detected: Hardware dirty bit management May 17 00:48:24.011699 kernel: CPU features: detected: Spectre-BHB May 17 00:48:24.011705 kernel: CPU features: kernel page table isolation forced ON by KASLR May 17 00:48:24.020734 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 17 00:48:24.020770 kernel: CPU features: detected: ARM erratum 1418040 May 17 00:48:24.020783 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) May 17 00:48:24.020790 kernel: CPU features: detected: SSBS not fully self-synchronizing May 17 00:48:24.020796 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 May 17 00:48:24.020803 kernel: Policy zone: Normal May 17 00:48:24.020811 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5 May 17 00:48:24.020818 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 17 00:48:24.020825 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:48:24.020831 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:48:24.020837 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:48:24.020844 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) May 17 00:48:24.020850 kernel: Memory: 3986936K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207224K reserved, 0K cma-reserved) May 17 00:48:24.020858 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:48:24.020864 kernel: trace event string verifier disabled May 17 00:48:24.020870 kernel: rcu: Preemptible hierarchical RCU implementation. May 17 00:48:24.020877 kernel: rcu: RCU event tracing is enabled. May 17 00:48:24.020883 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:48:24.020890 kernel: Trampoline variant of Tasks RCU enabled. May 17 00:48:24.020896 kernel: Tracing variant of Tasks RCU enabled. May 17 00:48:24.020902 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:48:24.020908 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:48:24.020915 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 17 00:48:24.020921 kernel: GICv3: 960 SPIs implemented May 17 00:48:24.020928 kernel: GICv3: 0 Extended SPIs implemented May 17 00:48:24.020934 kernel: GICv3: Distributor has no Range Selector support May 17 00:48:24.020940 kernel: Root IRQ handler: gic_handle_irq May 17 00:48:24.020947 kernel: GICv3: 16 PPIs implemented May 17 00:48:24.020953 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 May 17 00:48:24.020959 kernel: ITS: No ITS available, not enabling LPIs May 17 00:48:24.020965 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:48:24.020972 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 17 00:48:24.020978 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 17 00:48:24.020984 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 17 00:48:24.020990 kernel: Console: colour dummy device 80x25 May 17 00:48:24.020999 kernel: printk: console [tty1] enabled May 17 00:48:24.021005 kernel: ACPI: Core revision 20210730 May 17 00:48:24.021013 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 17 00:48:24.021019 kernel: pid_max: default: 32768 minimum: 301 May 17 00:48:24.021026 kernel: LSM: Security Framework initializing May 17 00:48:24.021033 kernel: SELinux: Initializing. 
May 17 00:48:24.021039 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:48:24.021046 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:48:24.021052 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 May 17 00:48:24.021060 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 May 17 00:48:24.021066 kernel: rcu: Hierarchical SRCU implementation. May 17 00:48:24.021072 kernel: Remapping and enabling EFI services. May 17 00:48:24.021079 kernel: smp: Bringing up secondary CPUs ... May 17 00:48:24.021085 kernel: Detected PIPT I-cache on CPU1 May 17 00:48:24.021091 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 May 17 00:48:24.021098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 00:48:24.021104 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 17 00:48:24.021110 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:48:24.021116 kernel: SMP: Total of 2 processors activated. 
May 17 00:48:24.021124 kernel: CPU features: detected: 32-bit EL0 Support May 17 00:48:24.021130 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence May 17 00:48:24.021137 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 00:48:24.021144 kernel: CPU features: detected: CRC32 instructions May 17 00:48:24.021150 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 00:48:24.021157 kernel: CPU features: detected: LSE atomic instructions May 17 00:48:24.021163 kernel: CPU features: detected: Privileged Access Never May 17 00:48:24.021170 kernel: CPU: All CPU(s) started at EL1 May 17 00:48:24.021176 kernel: alternatives: patching kernel code May 17 00:48:24.021184 kernel: devtmpfs: initialized May 17 00:48:24.021195 kernel: KASLR enabled May 17 00:48:24.021202 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:48:24.021209 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:48:24.021216 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:48:24.021223 kernel: SMBIOS 3.1.0 present. 
May 17 00:48:24.021230 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 May 17 00:48:24.021236 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:48:24.021243 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 17 00:48:24.021251 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 00:48:24.021258 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 00:48:24.021265 kernel: audit: initializing netlink subsys (disabled) May 17 00:48:24.021272 kernel: audit: type=2000 audit(0.088:1): state=initialized audit_enabled=0 res=1 May 17 00:48:24.021278 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:48:24.021285 kernel: cpuidle: using governor menu May 17 00:48:24.021292 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 17 00:48:24.021300 kernel: ASID allocator initialised with 32768 entries May 17 00:48:24.021306 kernel: ACPI: bus type PCI registered May 17 00:48:24.021313 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:48:24.021320 kernel: Serial: AMBA PL011 UART driver May 17 00:48:24.021327 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:48:24.021333 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 17 00:48:24.021340 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:48:24.021347 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 17 00:48:24.021353 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:48:24.021361 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:48:24.021368 kernel: ACPI: Added _OSI(Module Device) May 17 00:48:24.021374 kernel: ACPI: Added _OSI(Processor Device) May 17 00:48:24.021381 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:48:24.021388 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:48:24.021394 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:48:24.021401 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:48:24.021408 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:48:24.021414 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:48:24.021423 kernel: ACPI: Interpreter enabled May 17 00:48:24.021429 kernel: ACPI: Using GIC for interrupt routing May 17 00:48:24.021436 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA May 17 00:48:24.021442 kernel: printk: console [ttyAMA0] enabled May 17 00:48:24.021449 kernel: printk: bootconsole [pl11] disabled May 17 00:48:24.021456 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA May 17 00:48:24.021463 kernel: iommu: Default domain type: Translated May 17 00:48:24.021469 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 17 00:48:24.021476 kernel: vgaarb: loaded May 17 00:48:24.021483 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:48:24.021491 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:48:24.021497 kernel: PTP clock support registered May 17 00:48:24.021504 kernel: Registered efivars operations May 17 00:48:24.021510 kernel: No ACPI PMU IRQ for CPU0 May 17 00:48:24.021525 kernel: No ACPI PMU IRQ for CPU1 May 17 00:48:24.021532 kernel: clocksource: Switched to clocksource arch_sys_counter May 17 00:48:24.021539 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:48:24.021545 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:48:24.021554 kernel: pnp: PnP ACPI init May 17 00:48:24.021561 kernel: pnp: PnP ACPI: found 0 devices May 17 00:48:24.021568 kernel: NET: Registered PF_INET protocol family May 17 00:48:24.021575 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:48:24.021582 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:48:24.021588 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:48:24.021595 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:48:24.021602 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 17 00:48:24.021609 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:48:24.021617 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:48:24.021623 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:48:24.021630 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:48:24.021636 kernel: PCI: CLS 0 bytes, default 64 May 17 00:48:24.021643 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available May 17 00:48:24.021650 kernel: kvm [1]: HYP mode not available May 17 00:48:24.021656 kernel: Initialise system trusted keyrings May 17 00:48:24.021663 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:48:24.021670 kernel: Key type asymmetric registered May 17 00:48:24.021677 kernel: Asymmetric key parser 'x509' registered May 17 00:48:24.021684 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:48:24.021690 kernel: io scheduler mq-deadline registered May 17 00:48:24.021697 kernel: io scheduler kyber registered May 17 00:48:24.021703 kernel: io scheduler bfq registered May 17 00:48:24.021724 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:48:24.021732 kernel: thunder_xcv, ver 1.0 May 17 00:48:24.021739 kernel: thunder_bgx, ver 1.0 May 17 00:48:24.021746 kernel: nicpf, ver 1.0 May 17 00:48:24.021752 kernel: nicvf, ver 1.0 May 17 00:48:24.021907 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:48:24.021968 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:48:23 UTC (1747442903) May 17 00:48:24.021977 kernel: efifb: probing for efifb May 17 00:48:24.021984 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 17 00:48:24.021991 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 17 00:48:24.021997 kernel: efifb: scrolling: redraw May 17 00:48:24.022004 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:48:24.022013 kernel: Console: switching to colour frame buffer device 128x48 May 17 00:48:24.022020 kernel: fb0: EFI VGA frame buffer device May 17 00:48:24.022027 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 17 00:48:24.022034 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:48:24.022040 kernel: NET: Registered PF_INET6 protocol family May 17 00:48:24.022047 kernel: Segment Routing with IPv6 May 17 00:48:24.022053 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:48:24.022060 kernel: NET: Registered PF_PACKET protocol family May 17 00:48:24.022066 kernel: Key type dns_resolver registered May 17 00:48:24.022073 kernel: registered taskstats version 1 May 17 00:48:24.022081 kernel: Loading compiled-in X.509 certificates May 17 00:48:24.022088 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 2fa973ae674d09a62938b8c6a2b9446b5340adb7' May 17 00:48:24.022095 kernel: Key type .fscrypt registered May 17 00:48:24.022101 kernel: Key type fscrypt-provisioning registered May 17 00:48:24.022108 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:48:24.022115 kernel: ima: Allocated hash algorithm: sha1 May 17 00:48:24.022121 kernel: ima: No architecture policies found May 17 00:48:24.022128 kernel: clk: Disabling unused clocks May 17 00:48:24.022136 kernel: Freeing unused kernel memory: 36416K May 17 00:48:24.022142 kernel: Run /init as init process May 17 00:48:24.022149 kernel: with arguments: May 17 00:48:24.022155 kernel: /init May 17 00:48:24.022162 kernel: with environment: May 17 00:48:24.022168 kernel: HOME=/ May 17 00:48:24.022175 kernel: TERM=linux May 17 00:48:24.022182 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:48:24.022190 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:48:24.022201 systemd[1]: Detected virtualization microsoft. May 17 00:48:24.022209 systemd[1]: Detected architecture arm64. 
May 17 00:48:24.022216 systemd[1]: Running in initrd. May 17 00:48:24.022223 systemd[1]: No hostname configured, using default hostname. May 17 00:48:24.022229 systemd[1]: Hostname set to . May 17 00:48:24.022237 systemd[1]: Initializing machine ID from random generator. May 17 00:48:24.022245 systemd[1]: Queued start job for default target initrd.target. May 17 00:48:24.022253 systemd[1]: Started systemd-ask-password-console.path. May 17 00:48:24.022260 systemd[1]: Reached target cryptsetup.target. May 17 00:48:24.022267 systemd[1]: Reached target paths.target. May 17 00:48:24.022274 systemd[1]: Reached target slices.target. May 17 00:48:24.022282 systemd[1]: Reached target swap.target. May 17 00:48:24.022290 systemd[1]: Reached target timers.target. May 17 00:48:24.022297 systemd[1]: Listening on iscsid.socket. May 17 00:48:24.022304 systemd[1]: Listening on iscsiuio.socket. May 17 00:48:24.022313 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:48:24.022320 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:48:24.022327 systemd[1]: Listening on systemd-journald.socket. May 17 00:48:24.022334 systemd[1]: Listening on systemd-networkd.socket. May 17 00:48:24.022342 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:48:24.022349 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:48:24.022356 systemd[1]: Reached target sockets.target. May 17 00:48:24.022363 systemd[1]: Starting kmod-static-nodes.service... May 17 00:48:24.022370 systemd[1]: Finished network-cleanup.service. May 17 00:48:24.022378 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:48:24.022386 systemd[1]: Starting systemd-journald.service... May 17 00:48:24.022393 systemd[1]: Starting systemd-modules-load.service... May 17 00:48:24.022400 systemd[1]: Starting systemd-resolved.service... May 17 00:48:24.022407 systemd[1]: Starting systemd-vconsole-setup.service... 
May 17 00:48:24.022418 systemd-journald[276]: Journal started May 17 00:48:24.022462 systemd-journald[276]: Runtime Journal (/run/log/journal/21c0686e005a4aef99a5a13ff4ab5016) is 8.0M, max 78.5M, 70.5M free. May 17 00:48:23.994162 systemd-modules-load[277]: Inserted module 'overlay' May 17 00:48:24.042857 systemd[1]: Started systemd-journald.service. May 17 00:48:24.042882 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:48:24.034805 systemd-resolved[278]: Positive Trust Anchors: May 17 00:48:24.034813 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:48:24.034840 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:48:24.159632 kernel: Bridge firewalling registered May 17 00:48:24.159657 kernel: audit: type=1130 audit(1747442904.063:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.159668 kernel: SCSI subsystem initialized May 17 00:48:24.159676 kernel: audit: type=1130 audit(1747442904.120:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.159685 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 17 00:48:24.159694 kernel: device-mapper: uevent: version 1.0.3 May 17 00:48:24.159704 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:48:24.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.037070 systemd-resolved[278]: Defaulting to hostname 'linux'. May 17 00:48:24.187765 kernel: audit: type=1130 audit(1747442904.163:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.059804 systemd-modules-load[277]: Inserted module 'br_netfilter' May 17 00:48:24.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.078450 systemd[1]: Started systemd-resolved.service. May 17 00:48:24.215435 kernel: audit: type=1130 audit(1747442904.187:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.135539 systemd[1]: Finished kmod-static-nodes.service. 
May 17 00:48:24.162235 systemd-modules-load[277]: Inserted module 'dm_multipath' May 17 00:48:24.248841 kernel: audit: type=1130 audit(1747442904.221:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.164181 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:48:24.276798 kernel: audit: type=1130 audit(1747442904.251:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.188190 systemd[1]: Finished systemd-modules-load.service. May 17 00:48:24.221568 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:48:24.252321 systemd[1]: Reached target nss-lookup.target. May 17 00:48:24.279407 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:48:24.291286 systemd[1]: Starting systemd-sysctl.service... May 17 00:48:24.301171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:48:24.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.319598 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 17 00:48:24.346227 kernel: audit: type=1130 audit(1747442904.324:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.339909 systemd[1]: Finished systemd-sysctl.service. May 17 00:48:24.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.369013 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:48:24.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.377522 systemd[1]: Starting dracut-cmdline.service... May 17 00:48:24.404508 kernel: audit: type=1130 audit(1747442904.350:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.404533 kernel: audit: type=1130 audit(1747442904.373:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:24.405107 dracut-cmdline[298]: dracut-dracut-053 May 17 00:48:24.409821 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5 May 17 00:48:24.497738 kernel: Loading iSCSI transport class v2.0-870. May 17 00:48:24.513740 kernel: iscsi: registered transport (tcp) May 17 00:48:24.534364 kernel: iscsi: registered transport (qla4xxx) May 17 00:48:24.534386 kernel: QLogic iSCSI HBA Driver May 17 00:48:24.569476 systemd[1]: Finished dracut-cmdline.service. May 17 00:48:24.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:24.574819 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:48:24.630735 kernel: raid6: neonx8 gen() 13790 MB/s May 17 00:48:24.647723 kernel: raid6: neonx8 xor() 10817 MB/s May 17 00:48:24.667734 kernel: raid6: neonx4 gen() 13526 MB/s May 17 00:48:24.688724 kernel: raid6: neonx4 xor() 11052 MB/s May 17 00:48:24.708720 kernel: raid6: neonx2 gen() 12969 MB/s May 17 00:48:24.728724 kernel: raid6: neonx2 xor() 10556 MB/s May 17 00:48:24.749723 kernel: raid6: neonx1 gen() 10582 MB/s May 17 00:48:24.769721 kernel: raid6: neonx1 xor() 8802 MB/s May 17 00:48:24.789721 kernel: raid6: int64x8 gen() 6268 MB/s May 17 00:48:24.810722 kernel: raid6: int64x8 xor() 3540 MB/s May 17 00:48:24.830720 kernel: raid6: int64x4 gen() 7218 MB/s May 17 00:48:24.850721 kernel: raid6: int64x4 xor() 3848 MB/s May 17 00:48:24.871721 kernel: raid6: int64x2 gen() 6152 MB/s May 17 00:48:24.891724 kernel: raid6: int64x2 xor() 3323 MB/s May 17 00:48:24.911720 kernel: raid6: int64x1 gen() 5050 MB/s May 17 00:48:24.936708 kernel: raid6: int64x1 xor() 2645 MB/s May 17 00:48:24.936725 kernel: raid6: using algorithm neonx8 gen() 13790 MB/s May 17 00:48:24.936733 kernel: raid6: .... xor() 10817 MB/s, rmw enabled May 17 00:48:24.940831 kernel: raid6: using neon recovery algorithm May 17 00:48:24.961170 kernel: xor: measuring software checksum speed May 17 00:48:24.961182 kernel: 8regs : 17202 MB/sec May 17 00:48:24.964833 kernel: 32regs : 20697 MB/sec May 17 00:48:24.968419 kernel: arm64_neon : 27889 MB/sec May 17 00:48:24.968429 kernel: xor: using function: arm64_neon (27889 MB/sec) May 17 00:48:25.026728 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 17 00:48:25.036123 systemd[1]: Finished dracut-pre-udev.service. May 17 00:48:25.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:25.043000 audit: BPF prog-id=7 op=LOAD May 17 00:48:25.043000 audit: BPF prog-id=8 op=LOAD May 17 00:48:25.044510 systemd[1]: Starting systemd-udevd.service... May 17 00:48:25.061859 systemd-udevd[474]: Using default interface naming scheme 'v252'. May 17 00:48:25.066983 systemd[1]: Started systemd-udevd.service. May 17 00:48:25.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:25.077987 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:48:25.091265 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation May 17 00:48:25.117016 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:48:25.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:25.122373 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:48:25.158792 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:48:25.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:25.216729 kernel: hv_vmbus: Vmbus version:5.3 May 17 00:48:25.227734 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:48:25.227774 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:48:25.246868 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 May 17 00:48:25.246916 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:48:25.260721 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 May 17 00:48:25.260775 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:48:25.271166 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:48:25.280936 kernel: scsi host0: storvsc_host_t May 17 00:48:25.281110 kernel: scsi host1: storvsc_host_t May 17 00:48:25.281132 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:48:25.294443 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:48:25.315058 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 17 00:48:25.325780 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:48:25.325794 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:48:25.346663 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:48:25.346801 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 17 00:48:25.346904 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:48:25.346992 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:48:25.347077 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 00:48:25.347168 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:48:25.347180 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:48:25.372736 kernel: hv_netvsc 002248b9-8390-0022-48b9-8390002248b9 eth0: VF slot 1 added
May 17 00:48:25.388437 kernel: hv_vmbus: registering driver hv_pci May 17 00:48:25.388490 kernel: hv_pci 77bf6c4f-e647-422a-a5dc-c6dcbb3e36ea: PCI VMBus probing: Using version 0x10004 May 17 00:48:25.492453 kernel: hv_pci 77bf6c4f-e647-422a-a5dc-c6dcbb3e36ea: PCI host bridge to bus e647:00 May 17 00:48:25.492552 kernel: pci_bus e647:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 17 00:48:25.492652 kernel: pci_bus e647:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:48:25.492746 kernel: pci e647:00:02.0: [15b3:1018] type 00 class 0x020000 May 17 00:48:25.492837 kernel: pci e647:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:48:25.492912 kernel: pci e647:00:02.0: enabling Extended Tags May 17 00:48:25.492989 kernel: pci e647:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e647:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 17 00:48:25.493067 kernel: pci_bus e647:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:48:25.493138 kernel: pci e647:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:48:25.530739 kernel: mlx5_core e647:00:02.0: firmware version: 16.30.1284 May 17 00:48:25.751920 kernel: mlx5_core e647:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) May 17 00:48:25.752066 kernel: hv_netvsc 002248b9-8390-0022-48b9-8390002248b9 eth0: VF registering: eth1 May 17 00:48:25.752151 kernel: mlx5_core e647:00:02.0 eth1: joined to eth0 May 17 00:48:25.760741 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (536) May 17 00:48:25.766737 kernel: mlx5_core e647:00:02.0 enP58951s1: renamed from eth1 May 17 00:48:25.774719 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:48:25.785165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:48:25.936389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:48:25.968810 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:48:25.974316 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:48:25.985851 systemd[1]: Starting disk-uuid.service...
May 17 00:48:26.009742 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:48:26.016727 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:48:27.024738 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:48:27.024956 disk-uuid[604]: The operation has completed successfully.
May 17 00:48:27.084869 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:48:27.088877 systemd[1]: Finished disk-uuid.service.
May 17 00:48:27.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.094473 systemd[1]: Starting verity-setup.service...
May 17 00:48:27.138755 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 00:48:27.390946 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:48:27.401122 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:48:27.404617 systemd[1]: Finished verity-setup.service.
May 17 00:48:27.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.463762 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:48:27.463902 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:48:27.468127 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:48:27.468882 systemd[1]: Starting ignition-setup.service...
May 17 00:48:27.477793 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:48:27.511067 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:48:27.511113 kernel: BTRFS info (device sda6): using free space tree
May 17 00:48:27.515573 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:48:27.575749 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:48:27.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.584000 audit: BPF prog-id=9 op=LOAD
May 17 00:48:27.585099 systemd[1]: Starting systemd-networkd.service...
May 17 00:48:27.606763 systemd-networkd[845]: lo: Link UP
May 17 00:48:27.606779 systemd-networkd[845]: lo: Gained carrier
May 17 00:48:27.607219 systemd-networkd[845]: Enumeration completed
May 17 00:48:27.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.607342 systemd[1]: Started systemd-networkd.service.
May 17 00:48:27.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.611125 systemd-networkd[845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:48:27.651526 iscsid[854]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:48:27.651526 iscsid[854]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 17 00:48:27.651526 iscsid[854]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 17 00:48:27.651526 iscsid[854]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:48:27.651526 iscsid[854]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:48:27.651526 iscsid[854]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:48:27.651526 iscsid[854]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:48:27.733379 kernel: mlx5_core e647:00:02.0 enP58951s1: Link up
May 17 00:48:27.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.619911 systemd[1]: Reached target network.target.
May 17 00:48:27.754684 kernel: hv_netvsc 002248b9-8390-0022-48b9-8390002248b9 eth0: Data path switched to VF: enP58951s1
May 17 00:48:27.754851 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 17 00:48:27.625515 systemd[1]: Starting iscsiuio.service...
May 17 00:48:27.633185 systemd[1]: Started iscsiuio.service.
May 17 00:48:27.637539 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:48:27.638527 systemd[1]: Starting iscsid.service...
May 17 00:48:27.651281 systemd[1]: Started iscsid.service.
May 17 00:48:27.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.680083 systemd[1]: Starting dracut-initqueue.service...
May 17 00:48:27.722103 systemd[1]: Finished dracut-initqueue.service.
May 17 00:48:27.726853 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:48:27.737686 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:48:27.758944 systemd-networkd[845]: enP58951s1: Link UP
May 17 00:48:27.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.759027 systemd-networkd[845]: eth0: Link UP
May 17 00:48:27.759151 systemd-networkd[845]: eth0: Gained carrier
May 17 00:48:27.759546 systemd[1]: Reached target remote-fs.target.
May 17 00:48:27.768999 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:48:27.775107 systemd-networkd[845]: enP58951s1: Gained carrier
May 17 00:48:27.785253 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:48:27.789826 systemd-networkd[845]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 17 00:48:27.810676 systemd[1]: Finished ignition-setup.service.
May 17 00:48:27.817621 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:48:29.718817 systemd-networkd[845]: eth0: Gained IPv6LL
May 17 00:48:30.490941 ignition[870]: Ignition 2.14.0
May 17 00:48:30.490955 ignition[870]: Stage: fetch-offline
May 17 00:48:30.491021 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:48:30.491044 ignition[870]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:48:30.525645 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:48:30.525813 ignition[870]: parsed url from cmdline: ""
May 17 00:48:30.525817 ignition[870]: no config URL provided
May 17 00:48:30.525823 ignition[870]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:48:30.572983 kernel: kauditd_printk_skb: 18 callbacks suppressed
May 17 00:48:30.573007 kernel: audit: type=1130 audit(1747442910.546:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.538524 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:48:30.525830 ignition[870]: no config at "/usr/lib/ignition/user.ign"
May 17 00:48:30.547879 systemd[1]: Starting ignition-fetch.service...
May 17 00:48:30.525835 ignition[870]: failed to fetch config: resource requires networking
May 17 00:48:30.526043 ignition[870]: Ignition finished successfully
May 17 00:48:30.565654 ignition[876]: Ignition 2.14.0
May 17 00:48:30.565660 ignition[876]: Stage: fetch
May 17 00:48:30.565788 ignition[876]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:48:30.565807 ignition[876]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:48:30.571520 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:48:30.571673 ignition[876]: parsed url from cmdline: ""
May 17 00:48:30.571677 ignition[876]: no config URL provided
May 17 00:48:30.571691 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:48:30.571699 ignition[876]: no config at "/usr/lib/ignition/user.ign"
May 17 00:48:30.571751 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
May 17 00:48:30.688901 ignition[876]: GET result: OK
May 17 00:48:30.688964 ignition[876]: config has been read from IMDS userdata
May 17 00:48:30.692213 unknown[876]: fetched base config from "system"
May 17 00:48:30.689004 ignition[876]: parsing config with SHA512: 86b4c57f60355e184b06feec82c0f6c0bbbecf70db981ab1a6948054bb5e2b38f9aeda25b4d821ee882f27648c85ce4db9ef9f8a6b51d5eda9512cf8872982c5
May 17 00:48:30.692221 unknown[876]: fetched base config from "system"
May 17 00:48:30.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.692803 ignition[876]: fetch: fetch complete
May 17 00:48:30.733756 kernel: audit: type=1130 audit(1747442910.705:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.692226 unknown[876]: fetched user config from "azure"
May 17 00:48:30.692808 ignition[876]: fetch: fetch passed
May 17 00:48:30.697633 systemd[1]: Finished ignition-fetch.service.
May 17 00:48:30.692849 ignition[876]: Ignition finished successfully
May 17 00:48:30.724922 systemd[1]: Starting ignition-kargs.service...
May 17 00:48:30.738404 ignition[883]: Ignition 2.14.0
May 17 00:48:30.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.738410 ignition[883]: Stage: kargs
May 17 00:48:30.781400 kernel: audit: type=1130 audit(1747442910.757:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.753811 systemd[1]: Finished ignition-kargs.service.
May 17 00:48:30.738511 ignition[883]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:48:30.758851 systemd[1]: Starting ignition-disks.service...
May 17 00:48:30.738528 ignition[883]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:48:30.749068 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:48:30.750555 ignition[883]: kargs: kargs passed
May 17 00:48:30.750604 ignition[883]: Ignition finished successfully
May 17 00:48:30.804896 systemd[1]: Finished ignition-disks.service.
May 17 00:48:30.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.766841 ignition[889]: Ignition 2.14.0
May 17 00:48:30.842723 kernel: audit: type=1130 audit(1747442910.812:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.766847 ignition[889]: Stage: disks
May 17 00:48:30.830872 systemd[1]: Reached target initrd-root-device.target.
May 17 00:48:30.766944 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:48:30.835537 systemd[1]: Reached target local-fs-pre.target.
May 17 00:48:30.766963 ignition[889]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:48:30.839889 systemd[1]: Reached target local-fs.target.
May 17 00:48:30.769505 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:48:30.846890 systemd[1]: Reached target sysinit.target.
May 17 00:48:30.801819 ignition[889]: disks: disks passed
May 17 00:48:30.853387 systemd[1]: Reached target basic.target.
May 17 00:48:30.801883 ignition[889]: Ignition finished successfully
May 17 00:48:30.862319 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:48:30.935381 systemd-fsck[897]: ROOT: clean, 619/7326000 files, 481078/7359488 blocks
May 17 00:48:30.948202 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:48:30.973865 kernel: audit: type=1130 audit(1747442910.952:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:30.953389 systemd[1]: Mounting sysroot.mount...
May 17 00:48:30.992750 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:48:30.992990 systemd[1]: Mounted sysroot.mount.
May 17 00:48:30.999742 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:48:31.034698 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:48:31.039320 systemd[1]: Starting flatcar-metadata-hostname.service...
May 17 00:48:31.047133 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:48:31.047170 systemd[1]: Reached target ignition-diskful.target.
May 17 00:48:31.053286 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:48:31.106772 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:48:31.111594 systemd[1]: Starting initrd-setup-root.service...
May 17 00:48:31.138749 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (908)
May 17 00:48:31.138793 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:48:31.138809 kernel: BTRFS info (device sda6): using free space tree
May 17 00:48:31.143386 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:48:31.153944 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:48:31.157116 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:48:31.170016 initrd-setup-root[939]: cut: /sysroot/etc/group: No such file or directory
May 17 00:48:31.196014 initrd-setup-root[947]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:48:31.205428 initrd-setup-root[955]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:48:31.763617 systemd[1]: Finished initrd-setup-root.service.
May 17 00:48:31.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:31.768982 systemd[1]: Starting ignition-mount.service...
May 17 00:48:31.799466 kernel: audit: type=1130 audit(1747442911.767:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:31.791391 systemd[1]: Starting sysroot-boot.service...
May 17 00:48:31.795692 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 17 00:48:31.795844 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 17 00:48:31.821909 systemd[1]: Finished sysroot-boot.service.
May 17 00:48:31.844246 kernel: audit: type=1130 audit(1747442911.825:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:31.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:31.851508 ignition[977]: INFO : Ignition 2.14.0
May 17 00:48:31.851508 ignition[977]: INFO : Stage: mount
May 17 00:48:31.866951 ignition[977]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:48:31.866951 ignition[977]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:48:31.866951 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:48:31.866951 ignition[977]: INFO : mount: mount passed
May 17 00:48:31.866951 ignition[977]: INFO : Ignition finished successfully
May 17 00:48:31.919092 kernel: audit: type=1130 audit(1747442911.866:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:31.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:31.862969 systemd[1]: Finished ignition-mount.service.
May 17 00:48:32.402903 coreos-metadata[907]: May 17 00:48:32.402 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
May 17 00:48:32.410824 coreos-metadata[907]: May 17 00:48:32.405 INFO Fetch successful
May 17 00:48:32.439952 coreos-metadata[907]: May 17 00:48:32.439 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
May 17 00:48:32.463267 coreos-metadata[907]: May 17 00:48:32.463 INFO Fetch successful
May 17 00:48:32.481556 coreos-metadata[907]: May 17 00:48:32.481 INFO wrote hostname ci-3510.3.7-n-ce3994935d to /sysroot/etc/hostname
May 17 00:48:32.490092 systemd[1]: Finished flatcar-metadata-hostname.service.
May 17 00:48:32.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:32.496368 systemd[1]: Starting ignition-files.service...
May 17 00:48:32.522581 kernel: audit: type=1130 audit(1747442912.495:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:32.524572 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:48:32.542960 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (986)
May 17 00:48:32.554048 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:48:32.554066 kernel: BTRFS info (device sda6): using free space tree
May 17 00:48:32.554076 kernel: BTRFS info (device sda6): has skinny extents
May 17 00:48:32.562808 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:48:32.578947 ignition[1005]: INFO : Ignition 2.14.0
May 17 00:48:32.578947 ignition[1005]: INFO : Stage: files
May 17 00:48:32.587443 ignition[1005]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:48:32.587443 ignition[1005]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
May 17 00:48:32.587443 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
May 17 00:48:32.587443 ignition[1005]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:48:32.621353 ignition[1005]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:48:32.621353 ignition[1005]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:48:32.661009 ignition[1005]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:48:32.668404 ignition[1005]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:48:32.694133 unknown[1005]: wrote ssh authorized keys file for user: core
May 17 00:48:32.700231 ignition[1005]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:48:32.700231 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:48:32.700231 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:48:32.700231 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 00:48:32.700231 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 17 00:48:32.778883 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:48:32.904010 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 00:48:32.914799 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:48:32.914799 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 17 00:48:33.425425 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 17 00:48:33.503735 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:48:33.503735 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
May 17 00:48:33.523502 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1968104675"
May 17 00:48:33.661919 ignition[1005]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1968104675": device or resource busy
May 17 00:48:33.661919 ignition[1005]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1968104675", trying btrfs: device or resource busy
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1968104675"
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1968104675"
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem1968104675"
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem1968104675"
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem99805456"
May 17 00:48:33.661919 ignition[1005]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem99805456": device or resource busy
May 17 00:48:33.661919 ignition[1005]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem99805456", trying btrfs: device or resource busy
May 17 00:48:33.661919 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem99805456"
May 17 00:48:33.536053 systemd[1]: mnt-oem1968104675.mount: Deactivated successfully.
May 17 00:48:33.822433 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem99805456" May 17 00:48:33.822433 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem99805456" May 17 00:48:33.822433 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem99805456" May 17 00:48:33.822433 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:48:33.822433 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:48:33.822433 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 17 00:48:34.353221 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET result: OK May 17 00:48:34.646416 ignition[1005]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(15): [started] processing unit "waagent.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(15): [finished] processing unit "waagent.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(16): [started] processing unit "nvidia.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(16): [finished] processing unit "nvidia.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(17): [started] processing unit "containerd.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(17): [finished] processing unit "containerd.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(19): [started] processing unit "prepare-helm.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(19): op(1a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(19): [finished] processing unit "prepare-helm.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:48:34.658817 ignition[1005]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:48:34.658817 ignition[1005]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:48:34.658817 
ignition[1005]: INFO : files: files passed May 17 00:48:34.658817 ignition[1005]: INFO : Ignition finished successfully May 17 00:48:34.904055 kernel: audit: type=1130 audit(1747442914.669:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.660858 systemd[1]: Finished ignition-files.service. 
May 17 00:48:34.692858 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:48:34.697719 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:48:34.930459 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:48:34.698577 systemd[1]: Starting ignition-quench.service... May 17 00:48:34.717173 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:48:34.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.717285 systemd[1]: Finished ignition-quench.service. May 17 00:48:34.753966 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:48:34.759377 systemd[1]: Reached target ignition-complete.target. May 17 00:48:34.771881 systemd[1]: Starting initrd-parse-etc.service... May 17 00:48:34.793110 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:48:34.793208 systemd[1]: Finished initrd-parse-etc.service. May 17 00:48:34.807783 systemd[1]: Reached target initrd-fs.target. May 17 00:48:34.818408 systemd[1]: Reached target initrd.target. May 17 00:48:34.829419 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:48:34.830262 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:48:34.881672 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:48:34.896482 systemd[1]: Starting initrd-cleanup.service... May 17 00:48:35.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.916101 systemd[1]: Stopped target nss-lookup.target. 
May 17 00:48:34.920816 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:48:35.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.934991 systemd[1]: Stopped target timers.target. May 17 00:48:35.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.946870 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:48:35.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.946943 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:48:35.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.954928 systemd[1]: Stopped target initrd.target. May 17 00:48:34.963593 systemd[1]: Stopped target basic.target. May 17 00:48:35.125549 iscsid[854]: iscsid shutting down. May 17 00:48:35.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.971216 systemd[1]: Stopped target ignition-complete.target. 
May 17 00:48:35.142843 ignition[1043]: INFO : Ignition 2.14.0 May 17 00:48:35.142843 ignition[1043]: INFO : Stage: umount May 17 00:48:35.142843 ignition[1043]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:35.142843 ignition[1043]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:35.142843 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:35.142843 ignition[1043]: INFO : umount: umount passed May 17 00:48:35.142843 ignition[1043]: INFO : Ignition finished successfully May 17 00:48:35.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:35.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.980736 systemd[1]: Stopped target ignition-diskful.target. May 17 00:48:35.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.989943 systemd[1]: Stopped target initrd-root-device.target. May 17 00:48:35.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:34.998409 systemd[1]: Stopped target remote-fs.target. May 17 00:48:35.007367 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:48:35.015529 systemd[1]: Stopped target sysinit.target. May 17 00:48:35.023327 systemd[1]: Stopped target local-fs.target. May 17 00:48:35.030935 systemd[1]: Stopped target local-fs-pre.target. May 17 00:48:35.041682 systemd[1]: Stopped target swap.target. May 17 00:48:35.049346 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:48:35.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.049417 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:48:35.057919 systemd[1]: Stopped target cryptsetup.target. May 17 00:48:35.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.065909 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
May 17 00:48:35.065959 systemd[1]: Stopped dracut-initqueue.service. May 17 00:48:35.073966 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:48:35.074009 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:48:35.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.083327 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:48:35.083363 systemd[1]: Stopped ignition-files.service. May 17 00:48:35.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.090890 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:48:35.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.090929 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:48:35.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.101496 systemd[1]: Stopping ignition-mount.service... May 17 00:48:35.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:35.108702 systemd[1]: Stopping iscsid.service... May 17 00:48:35.117359 systemd[1]: Stopping sysroot-boot.service... May 17 00:48:35.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.127284 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:48:35.423000 audit: BPF prog-id=6 op=UNLOAD May 17 00:48:35.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.127348 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:48:35.133950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:48:35.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.134005 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:48:35.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.155443 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:48:35.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.156081 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:48:35.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:35.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.156177 systemd[1]: Stopped iscsid.service. May 17 00:48:35.505095 kernel: hv_netvsc 002248b9-8390-0022-48b9-8390002248b9 eth0: Data path switched from VF: enP58951s1 May 17 00:48:35.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.167128 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:48:35.167198 systemd[1]: Finished initrd-cleanup.service. May 17 00:48:35.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.183704 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:48:35.183813 systemd[1]: Stopped ignition-mount.service. May 17 00:48:35.195153 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:48:35.195196 systemd[1]: Stopped ignition-disks.service. May 17 00:48:35.203156 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:48:35.203194 systemd[1]: Stopped ignition-kargs.service. May 17 00:48:35.207677 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:48:35.207719 systemd[1]: Stopped ignition-fetch.service. May 17 00:48:35.217800 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 17 00:48:35.217843 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:48:35.226772 systemd[1]: Stopped target paths.target. May 17 00:48:35.236063 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:48:35.239738 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:48:35.244772 systemd[1]: Stopped target slices.target. May 17 00:48:35.258491 systemd[1]: Stopped target sockets.target. May 17 00:48:35.266413 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:48:35.266464 systemd[1]: Closed iscsid.socket. May 17 00:48:35.274117 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:48:35.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.274159 systemd[1]: Stopped ignition-setup.service. May 17 00:48:35.629228 kernel: kauditd_printk_skb: 41 callbacks suppressed May 17 00:48:35.629248 kernel: audit: type=1131 audit(1747442915.596:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:35.282543 systemd[1]: Stopping iscsiuio.service... May 17 00:48:35.292011 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 00:48:35.292128 systemd[1]: Stopped iscsiuio.service. May 17 00:48:35.299857 systemd[1]: Stopped target network.target. May 17 00:48:35.308092 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:48:35.308132 systemd[1]: Closed iscsiuio.socket. May 17 00:48:35.312324 systemd[1]: Stopping systemd-networkd.service... 
May 17 00:48:35.322745 systemd-networkd[845]: eth0: DHCPv6 lease lost May 17 00:48:35.672506 kernel: audit: type=1334 audit(1747442915.661:81): prog-id=9 op=UNLOAD May 17 00:48:35.661000 audit: BPF prog-id=9 op=UNLOAD May 17 00:48:35.323883 systemd[1]: Stopping systemd-resolved.service... May 17 00:48:35.331704 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:48:35.676000 audit: BPF prog-id=8 op=UNLOAD May 17 00:48:35.331822 systemd[1]: Stopped systemd-networkd.service. May 17 00:48:35.686000 audit: BPF prog-id=7 op=UNLOAD May 17 00:48:35.342379 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:48:35.711345 kernel: audit: type=1334 audit(1747442915.676:82): prog-id=8 op=UNLOAD May 17 00:48:35.711368 kernel: audit: type=1334 audit(1747442915.686:83): prog-id=7 op=UNLOAD May 17 00:48:35.711377 kernel: audit: type=1334 audit(1747442915.686:84): prog-id=5 op=UNLOAD May 17 00:48:35.711392 kernel: audit: type=1334 audit(1747442915.686:85): prog-id=4 op=UNLOAD May 17 00:48:35.686000 audit: BPF prog-id=5 op=UNLOAD May 17 00:48:35.686000 audit: BPF prog-id=4 op=UNLOAD May 17 00:48:35.342454 systemd[1]: Stopped sysroot-boot.service. May 17 00:48:35.720651 kernel: audit: type=1334 audit(1747442915.686:86): prog-id=3 op=UNLOAD May 17 00:48:35.686000 audit: BPF prog-id=3 op=UNLOAD May 17 00:48:35.346669 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:48:35.346700 systemd[1]: Closed systemd-networkd.socket. May 17 00:48:35.736461 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). May 17 00:48:35.355471 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:48:35.355517 systemd[1]: Stopped initrd-setup-root.service. May 17 00:48:35.366960 systemd[1]: Stopping network-cleanup.service... May 17 00:48:35.374300 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:48:35.374364 systemd[1]: Stopped parse-ip-for-networkd.service. 
May 17 00:48:35.379353 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:48:35.379406 systemd[1]: Stopped systemd-sysctl.service. May 17 00:48:35.393338 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:48:35.393382 systemd[1]: Stopped systemd-modules-load.service. May 17 00:48:35.399323 systemd[1]: Stopping systemd-udevd.service... May 17 00:48:35.408740 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:48:35.409246 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:48:35.409380 systemd[1]: Stopped systemd-resolved.service. May 17 00:48:35.415848 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:48:35.415984 systemd[1]: Stopped systemd-udevd.service. May 17 00:48:35.424582 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:48:35.424640 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:48:35.433924 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:48:35.433958 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:48:35.438610 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:48:35.438660 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:48:35.447159 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:48:35.447203 systemd[1]: Stopped dracut-cmdline.service. May 17 00:48:35.456600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:48:35.456641 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:48:35.466094 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:48:35.474346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:48:35.474406 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:48:35.479793 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 17 00:48:35.479835 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:48:35.484607 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:48:35.484661 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:48:35.505830 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:48:35.506310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:48:35.506396 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:48:35.589788 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:48:35.589903 systemd[1]: Stopped network-cleanup.service. May 17 00:48:35.596849 systemd[1]: Reached target initrd-switch-root.target. May 17 00:48:35.629163 systemd[1]: Starting initrd-switch-root.service... May 17 00:48:35.674264 systemd[1]: Switching root. May 17 00:48:35.737490 systemd-journald[276]: Journal stopped May 17 00:48:58.731298 kernel: audit: type=1335 audit(1747442915.737:87): pid=276 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe=2F7573722F6C69622F73797374656D642F73797374656D642D6A6F75726E616C64202864656C6574656429 nl-mcgrp=1 op=disconnect res=1 May 17 00:48:58.731318 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:48:58.731329 kernel: SELinux: Class anon_inode not defined in policy. 
May 17 00:48:58.731340 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:48:58.731348 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:48:58.731356 kernel: SELinux: policy capability open_perms=1 May 17 00:48:58.731365 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:48:58.731374 kernel: SELinux: policy capability always_check_network=0 May 17 00:48:58.731382 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:48:58.731390 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:48:58.731400 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:48:58.731408 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:48:58.731416 kernel: audit: type=1403 audit(1747442920.954:88): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:48:58.731426 systemd[1]: Successfully loaded SELinux policy in 302.964ms. May 17 00:48:58.731437 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.306ms. May 17 00:48:58.731449 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:48:58.731458 systemd[1]: Detected virtualization microsoft. May 17 00:48:58.731467 systemd[1]: Detected architecture arm64. May 17 00:48:58.731476 systemd[1]: Detected first boot. May 17 00:48:58.731485 systemd[1]: Hostname set to . May 17 00:48:58.731495 systemd[1]: Initializing machine ID from random generator. May 17 00:48:58.731504 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
May 17 00:48:58.731515 kernel: audit: type=1400 audit(1747442923.571:89): avc: denied { associate } for pid=1093 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:48:58.731525 kernel: audit: type=1300 audit(1747442923.571:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=400002221c a1=40000282b8 a2=4000026440 a3=32 items=0 ppid=1076 pid=1093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:48:58.731534 kernel: audit: type=1327 audit(1747442923.571:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:48:58.731544 kernel: audit: type=1400 audit(1747442923.580:90): avc: denied { associate } for pid=1093 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:48:58.731554 kernel: audit: type=1300 audit(1747442923.580:90): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000222f5 a2=1ed a3=0 items=2 ppid=1076 pid=1093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:48:58.731564 kernel: audit: type=1307 audit(1747442923.580:90): cwd="/" May 17 00:48:58.731573 kernel: audit: type=1302 audit(1747442923.580:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:48:58.731583 kernel: audit: type=1302 audit(1747442923.580:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:48:58.731592 kernel: audit: type=1327 audit(1747442923.580:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:48:58.731602 systemd[1]: Populated /etc with preset unit settings. May 17 00:48:58.731611 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:48:58.731621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:48:58.731633 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:48:58.731642 systemd[1]: Queued start job for default target multi-user.target. May 17 00:48:58.731653 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:48:58.731663 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:48:58.731672 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:48:58.731682 systemd[1]: Created slice system-getty.slice. May 17 00:48:58.731693 systemd[1]: Created slice system-modprobe.slice. May 17 00:48:58.731704 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:48:58.731725 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
May 17 00:48:58.731736 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 00:48:58.731745 systemd[1]: Created slice user.slice.
May 17 00:48:58.731755 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:48:58.731764 systemd[1]: Started systemd-ask-password-wall.path.
May 17 00:48:58.731773 systemd[1]: Set up automount boot.automount.
May 17 00:48:58.731783 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 00:48:58.731792 systemd[1]: Reached target integritysetup.target.
May 17 00:48:58.731803 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:48:58.731812 systemd[1]: Reached target remote-fs.target.
May 17 00:48:58.731821 systemd[1]: Reached target slices.target.
May 17 00:48:58.731831 systemd[1]: Reached target swap.target.
May 17 00:48:58.731841 systemd[1]: Reached target torcx.target.
May 17 00:48:58.731851 systemd[1]: Reached target veritysetup.target.
May 17 00:48:58.731860 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:48:58.731870 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:48:58.731881 kernel: audit: type=1400 audit(1747442938.333:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:48:58.731890 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:48:58.731900 kernel: audit: type=1335 audit(1747442938.333:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 17 00:48:58.731909 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:48:58.731918 systemd[1]: Listening on systemd-journald.socket.
May 17 00:48:58.731928 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:48:58.731937 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:48:58.731948 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:48:58.731957 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:48:58.731967 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:48:58.731977 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:48:58.731986 systemd[1]: Mounting media.mount...
May 17 00:48:58.731997 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:48:58.732006 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:48:58.732016 systemd[1]: Mounting tmp.mount...
May 17 00:48:58.732026 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:48:58.732036 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:48:58.732046 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:48:58.732055 systemd[1]: Starting modprobe@configfs.service...
May 17 00:48:58.732065 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:48:58.732075 systemd[1]: Starting modprobe@drm.service...
May 17 00:48:58.732086 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:48:58.732096 systemd[1]: Starting modprobe@fuse.service...
May 17 00:48:58.732106 systemd[1]: Starting modprobe@loop.service...
May 17 00:48:58.732115 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:48:58.732125 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 17 00:48:58.732135 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 17 00:48:58.732144 systemd[1]: Starting systemd-journald.service...
May 17 00:48:58.732153 systemd[1]: Starting systemd-modules-load.service...
May 17 00:48:58.732163 systemd[1]: Starting systemd-network-generator.service...
May 17 00:48:58.732174 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:48:58.732183 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:48:58.732193 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:48:58.732202 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:48:58.732212 systemd[1]: Mounted media.mount.
May 17 00:48:58.732221 kernel: audit: type=1305 audit(1747442938.728:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:48:58.732234 systemd-journald[1217]: Journal started
May 17 00:48:58.732273 systemd-journald[1217]: Runtime Journal (/run/log/journal/e0bb566f54f84268beecdd55b1dc3425) is 8.0M, max 78.5M, 70.5M free.
May 17 00:48:58.333000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 17 00:48:58.728000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:48:58.744989 systemd[1]: Started systemd-journald.service.
May 17 00:48:58.745044 kernel: audit: type=1300 audit(1747442938.728:93): arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc40d9d00 a2=4000 a3=1 items=0 ppid=1 pid=1217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:48:58.728000 audit[1217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc40d9d00 a2=4000 a3=1 items=0 ppid=1 pid=1217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:48:58.776184 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:48:58.781118 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:48:58.785942 systemd[1]: Mounted tmp.mount.
May 17 00:48:58.728000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:48:58.793992 kernel: fuse: init (API version 7.34)
May 17 00:48:58.794043 kernel: audit: type=1327 audit(1747442938.728:93): proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:48:58.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.820810 kernel: audit: type=1130 audit(1747442938.774:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.803974 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:48:58.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.826492 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:48:58.851734 kernel: audit: type=1130 audit(1747442938.825:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.851785 kernel: loop: module loaded
May 17 00:48:58.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.853453 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:48:58.853622 systemd[1]: Finished modprobe@configfs.service.
May 17 00:48:58.876189 kernel: audit: type=1130 audit(1747442938.852:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.882377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:48:58.882554 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:48:58.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.921625 kernel: audit: type=1130 audit(1747442938.881:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.921663 kernel: audit: type=1131 audit(1747442938.881:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.922312 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:48:58.922483 systemd[1]: Finished modprobe@drm.service.
May 17 00:48:58.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.927462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:48:58.927623 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:48:58.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.932883 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:48:58.933032 systemd[1]: Finished modprobe@fuse.service.
May 17 00:48:58.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.937852 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:48:58.938013 systemd[1]: Finished modprobe@loop.service.
May 17 00:48:58.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.943636 systemd[1]: Finished systemd-modules-load.service.
May 17 00:48:58.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.949317 systemd[1]: Finished systemd-network-generator.service.
May 17 00:48:58.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.954856 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:48:58.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.960600 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:48:58.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:58.966416 systemd[1]: Reached target network-pre.target.
May 17 00:48:58.972790 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:48:58.982325 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:48:58.986694 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:48:59.019875 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:48:59.025613 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:48:59.030076 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:48:59.031290 systemd[1]: Starting systemd-random-seed.service...
May 17 00:48:59.035948 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:48:59.037219 systemd[1]: Starting systemd-sysctl.service...
May 17 00:48:59.042452 systemd[1]: Starting systemd-sysusers.service...
May 17 00:48:59.047744 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:48:59.054762 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:48:59.060233 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:48:59.066543 udevadm[1246]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:48:59.081015 systemd[1]: Finished systemd-random-seed.service.
May 17 00:48:59.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:59.086231 systemd[1]: Reached target first-boot-complete.target.
May 17 00:48:59.116785 systemd[1]: Finished systemd-sysctl.service.
May 17 00:48:59.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:59.141331 systemd-journald[1217]: Time spent on flushing to /var/log/journal/e0bb566f54f84268beecdd55b1dc3425 is 12.892ms for 1046 entries.
May 17 00:48:59.141331 systemd-journald[1217]: System Journal (/var/log/journal/e0bb566f54f84268beecdd55b1dc3425) is 8.0M, max 2.6G, 2.6G free.
May 17 00:48:59.286445 systemd-journald[1217]: Received client request to flush runtime journal.
May 17 00:48:59.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:59.287433 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:48:59.811887 systemd[1]: Finished systemd-sysusers.service.
May 17 00:48:59.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:59.818224 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:49:00.363136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:49:00.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:01.069618 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:49:01.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:01.076049 systemd[1]: Starting systemd-udevd.service...
May 17 00:49:01.094186 systemd-udevd[1257]: Using default interface naming scheme 'v252'.
May 17 00:49:01.377697 systemd[1]: Started systemd-udevd.service.
May 17 00:49:01.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:01.389068 systemd[1]: Starting systemd-networkd.service...
May 17 00:49:01.416586 systemd[1]: Found device dev-ttyAMA0.device.
May 17 00:49:01.472899 systemd[1]: Starting systemd-userdbd.service...
May 17 00:49:01.493741 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:49:01.492000 audit[1258]: AVC avc: denied { confidentiality } for pid=1258 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:49:01.498811 kernel: hv_vmbus: registering driver hv_balloon
May 17 00:49:01.508743 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 17 00:49:01.508851 kernel: hv_balloon: Memory hot add disabled on ARM64
May 17 00:49:01.511731 kernel: hv_vmbus: registering driver hyperv_fb
May 17 00:49:01.520327 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 17 00:49:01.521729 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 17 00:49:01.533042 kernel: Console: switching to colour dummy device 80x25
May 17 00:49:01.535736 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:49:01.492000 audit[1258]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadb7b8fb0 a1=aa2c a2=ffff894b24b0 a3=aaaadb71a010 items=12 ppid=1257 pid=1258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:49:01.492000 audit: CWD cwd="/"
May 17 00:49:01.492000 audit: PATH item=0 name=(null) inode=5836 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=1 name=(null) inode=10020 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=2 name=(null) inode=10020 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=3 name=(null) inode=10021 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=4 name=(null) inode=10020 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=5 name=(null) inode=10022 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=6 name=(null) inode=10020 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=7 name=(null) inode=10023 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=8 name=(null) inode=10020 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=9 name=(null) inode=10024 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=10 name=(null) inode=10020 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PATH item=11 name=(null) inode=10025 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:49:01.492000 audit: PROCTITLE proctitle="(udev-worker)"
May 17 00:49:01.550701 systemd[1]: Started systemd-userdbd.service.
May 17 00:49:01.559969 kernel: hv_utils: Registering HyperV Utility Driver
May 17 00:49:01.560021 kernel: hv_vmbus: registering driver hv_utils
May 17 00:49:01.560774 kernel: hv_utils: Heartbeat IC version 3.0
May 17 00:49:01.570538 kernel: hv_utils: Shutdown IC version 3.2
May 17 00:49:01.570609 kernel: hv_utils: TimeSync IC version 4.0
May 17 00:49:01.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:01.813250 systemd[1]: Finished systemd-udev-settle.service.
May 17 00:49:01.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:01.822634 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:49:01.829492 systemd[1]: Starting lvm2-activation-early.service...
May 17 00:49:02.181732 systemd-networkd[1278]: lo: Link UP
May 17 00:49:02.181745 systemd-networkd[1278]: lo: Gained carrier
May 17 00:49:02.182128 systemd-networkd[1278]: Enumeration completed
May 17 00:49:02.182286 systemd[1]: Started systemd-networkd.service.
May 17 00:49:02.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:02.188183 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 00:49:02.209903 systemd-networkd[1278]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:49:02.256650 kernel: mlx5_core e647:00:02.0 enP58951s1: Link up
May 17 00:49:02.284672 kernel: hv_netvsc 002248b9-8390-0022-48b9-8390002248b9 eth0: Data path switched to VF: enP58951s1
May 17 00:49:02.284942 systemd-networkd[1278]: enP58951s1: Link UP
May 17 00:49:02.285063 systemd-networkd[1278]: eth0: Link UP
May 17 00:49:02.285072 systemd-networkd[1278]: eth0: Gained carrier
May 17 00:49:02.289911 systemd-networkd[1278]: enP58951s1: Gained carrier
May 17 00:49:02.300739 systemd-networkd[1278]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 17 00:49:02.349652 lvm[1333]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:49:02.374556 systemd[1]: Finished lvm2-activation-early.service.
May 17 00:49:02.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:02.380176 systemd[1]: Reached target cryptsetup.target.
May 17 00:49:02.385942 systemd[1]: Starting lvm2-activation.service...
May 17 00:49:02.390268 lvm[1337]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:49:02.409531 systemd[1]: Finished lvm2-activation.service.
May 17 00:49:02.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:02.414747 systemd[1]: Reached target local-fs-pre.target.
May 17 00:49:02.419668 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:49:02.419779 systemd[1]: Reached target local-fs.target.
May 17 00:49:02.424162 systemd[1]: Reached target machines.target.
May 17 00:49:02.430024 systemd[1]: Starting ldconfig.service...
May 17 00:49:02.433989 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:49:02.434139 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:49:02.435460 systemd[1]: Starting systemd-boot-update.service...
May 17 00:49:02.441050 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 17 00:49:02.447933 systemd[1]: Starting systemd-machine-id-commit.service...
May 17 00:49:02.453935 systemd[1]: Starting systemd-sysext.service...
May 17 00:49:02.546894 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 17 00:49:02.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:02.733412 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1340 (bootctl)
May 17 00:49:02.734655 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 17 00:49:03.085288 systemd[1]: Unmounting usr-share-oem.mount...
May 17 00:49:03.090426 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 17 00:49:03.090666 systemd[1]: Unmounted usr-share-oem.mount.
May 17 00:49:03.151908 kernel: loop0: detected capacity change from 0 to 203944
May 17 00:49:03.195659 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:49:03.213699 kernel: loop1: detected capacity change from 0 to 203944
May 17 00:49:03.218056 (sd-sysext)[1356]: Using extensions 'kubernetes'.
May 17 00:49:03.218451 (sd-sysext)[1356]: Merged extensions into '/usr'.
May 17 00:49:03.236280 systemd[1]: Mounting usr-share-oem.mount...
May 17 00:49:03.240696 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:49:03.241987 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:49:03.248410 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:49:03.254101 systemd[1]: Starting modprobe@loop.service...
May 17 00:49:03.258392 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:49:03.258612 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:49:03.259594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:49:03.259886 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:49:03.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.266078 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:49:03.266232 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:49:03.268501 kernel: kauditd_printk_skb: 43 callbacks suppressed
May 17 00:49:03.268561 kernel: audit: type=1130 audit(1747442943.264:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.300771 kernel: audit: type=1131 audit(1747442943.264:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.306121 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:49:03.361941 kernel: audit: type=1130 audit(1747442943.305:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.362014 kernel: audit: type=1131 audit(1747442943.305:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.362037 kernel: audit: type=1130 audit(1747442943.344:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.306308 systemd[1]: Finished modprobe@loop.service.
May 17 00:49:03.345377 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:49:03.345479 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:49:03.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.378913 kernel: audit: type=1131 audit(1747442943.344:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.386022 systemd[1]: Mounted usr-share-oem.mount.
May 17 00:49:03.391223 systemd[1]: Finished systemd-sysext.service.
May 17 00:49:03.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.417946 kernel: audit: type=1130 audit(1747442943.395:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:49:03.397142 systemd[1]: Starting ensure-sysext.service...
May 17 00:49:03.418589 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 17 00:49:03.426702 systemd[1]: Reloading.
May 17 00:49:03.436420 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:49:03.456976 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:49:03.479716 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:49:03.490432 /usr/lib/systemd/system-generators/torcx-generator[1391]: time="2025-05-17T00:49:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:49:03.495841 /usr/lib/systemd/system-generators/torcx-generator[1391]: time="2025-05-17T00:49:03Z" level=info msg="torcx already run" May 17 00:49:03.506976 systemd-fsck[1349]: fsck.fat 4.2 (2021-01-31) May 17 00:49:03.506976 systemd-fsck[1349]: /dev/sda1: 236 files, 117182/258078 clusters May 17 00:49:03.509753 systemd-networkd[1278]: eth0: Gained IPv6LL May 17 00:49:03.573266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:49:03.573485 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:49:03.590403 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:49:03.645957 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:49:03.661994 systemd[1]: Finished systemd-networkd-wait-online.service. 
May 17 00:49:03.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.668176 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:49:03.691537 kernel: audit: type=1130 audit(1747442943.666:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.692564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:49:03.710833 kernel: audit: type=1130 audit(1747442943.691:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.722658 systemd[1]: Mounting boot.mount... May 17 00:49:03.744906 kernel: audit: type=1130 audit(1747442943.716:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.748647 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 17 00:49:03.749930 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:49:03.757306 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:49:03.765544 systemd[1]: Starting modprobe@loop.service... May 17 00:49:03.769976 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:49:03.770114 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:49:03.772582 systemd[1]: Mounted boot.mount. May 17 00:49:03.779227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:49:03.779410 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:49:03.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.784434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:49:03.784693 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:49:03.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.789890 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:49:03.790102 systemd[1]: Finished modprobe@loop.service. May 17 00:49:03.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.795030 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:49:03.795125 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:49:03.796741 systemd[1]: Finished systemd-boot-update.service. May 17 00:49:03.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.802168 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:49:03.803646 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:49:03.808758 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:49:03.813972 systemd[1]: Starting modprobe@loop.service... May 17 00:49:03.818028 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:49:03.818177 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:49:03.818968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:49:03.819135 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 00:49:03.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.824075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:49:03.824228 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:49:03.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.830063 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:49:03.830281 systemd[1]: Finished modprobe@loop.service. May 17 00:49:03.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.838396 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:49:03.839720 systemd[1]: Starting modprobe@dm_mod.service... 
May 17 00:49:03.845468 systemd[1]: Starting modprobe@drm.service... May 17 00:49:03.850724 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:49:03.856129 systemd[1]: Starting modprobe@loop.service... May 17 00:49:03.860109 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:49:03.860244 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:49:03.861144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:49:03.861314 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:49:03.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.866534 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:49:03.866712 systemd[1]: Finished modprobe@drm.service. May 17 00:49:03.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.871736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:49:03.871896 systemd[1]: Finished modprobe@efi_pstore.service. 
May 17 00:49:03.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.877047 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:49:03.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.877280 systemd[1]: Finished modprobe@loop.service. May 17 00:49:03.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:03.882490 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:49:03.882578 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:49:03.883793 systemd[1]: Finished ensure-sysext.service. May 17 00:49:03.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:04.383596 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:49:04.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:49:04.390276 systemd[1]: Starting audit-rules.service... May 17 00:49:04.395265 systemd[1]: Starting clean-ca-certificates.service... May 17 00:49:04.400719 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:49:04.407213 systemd[1]: Starting systemd-resolved.service... May 17 00:49:04.412823 systemd[1]: Starting systemd-timesyncd.service... May 17 00:49:04.418119 systemd[1]: Starting systemd-update-utmp.service... May 17 00:49:04.422883 systemd[1]: Finished clean-ca-certificates.service. May 17 00:49:04.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:04.428139 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:49:04.453000 audit[1498]: SYSTEM_BOOT pid=1498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:49:04.457048 systemd[1]: Finished systemd-update-utmp.service. May 17 00:49:04.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:04.504254 systemd[1]: Started systemd-timesyncd.service. May 17 00:49:04.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:04.509107 systemd[1]: Reached target time-set.target. 
May 17 00:49:04.618745 systemd-resolved[1495]: Positive Trust Anchors: May 17 00:49:04.619089 systemd-resolved[1495]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:49:04.619166 systemd-resolved[1495]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:49:04.636338 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:49:04.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:49:04.657000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:49:04.657000 audit[1514]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffffd49a30 a2=420 a3=0 items=0 ppid=1491 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:49:04.657000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:49:04.658678 augenrules[1514]: No rules May 17 00:49:04.659515 systemd[1]: Finished audit-rules.service. May 17 00:49:04.754922 systemd-resolved[1495]: Using system hostname 'ci-3510.3.7-n-ce3994935d'. May 17 00:49:04.756509 systemd[1]: Started systemd-resolved.service. 
May 17 00:49:04.761465 systemd[1]: Reached target network.target. May 17 00:49:04.765972 systemd[1]: Reached target network-online.target. May 17 00:49:04.770845 systemd[1]: Reached target nss-lookup.target. May 17 00:49:13.021387 ldconfig[1339]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:49:13.031591 systemd[1]: Finished ldconfig.service. May 17 00:49:13.037962 systemd[1]: Starting systemd-update-done.service... May 17 00:49:13.164437 systemd[1]: Finished systemd-update-done.service. May 17 00:49:13.170040 systemd[1]: Reached target sysinit.target. May 17 00:49:13.174354 systemd[1]: Started motdgen.path. May 17 00:49:13.178163 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:49:13.184340 systemd[1]: Started logrotate.timer. May 17 00:49:13.188225 systemd[1]: Started mdadm.timer. May 17 00:49:13.191877 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:49:13.196420 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:49:13.196455 systemd[1]: Reached target paths.target. May 17 00:49:13.200808 systemd[1]: Reached target timers.target. May 17 00:49:13.205438 systemd[1]: Listening on dbus.socket. May 17 00:49:13.210777 systemd[1]: Starting docker.socket... May 17 00:49:13.245307 systemd[1]: Listening on sshd.socket. May 17 00:49:13.249310 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:49:13.249766 systemd[1]: Listening on docker.socket. May 17 00:49:13.253784 systemd[1]: Reached target sockets.target. May 17 00:49:13.257867 systemd[1]: Reached target basic.target. 
May 17 00:49:13.262386 systemd[1]: System is tainted: cgroupsv1 May 17 00:49:13.262437 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:49:13.262456 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:49:13.263659 systemd[1]: Starting containerd.service... May 17 00:49:13.268422 systemd[1]: Starting dbus.service... May 17 00:49:13.272878 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:49:13.278727 systemd[1]: Starting extend-filesystems.service... May 17 00:49:13.282910 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:49:13.284520 systemd[1]: Starting kubelet.service... May 17 00:49:13.289152 systemd[1]: Starting motdgen.service... May 17 00:49:13.293950 systemd[1]: Started nvidia.service. May 17 00:49:13.299186 systemd[1]: Starting prepare-helm.service... May 17 00:49:13.304259 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:49:13.309877 systemd[1]: Starting sshd-keygen.service... May 17 00:49:13.315608 systemd[1]: Starting systemd-logind.service... May 17 00:49:13.319649 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:49:13.319733 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:49:13.322826 systemd[1]: Starting update-engine.service... May 17 00:49:13.330422 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:49:13.338435 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:49:13.339801 systemd[1]: Finished ssh-key-proc-cmdline.service. 
May 17 00:49:13.365735 jq[1550]: true May 17 00:49:13.370381 jq[1529]: false May 17 00:49:13.372460 extend-filesystems[1530]: Found loop1 May 17 00:49:13.378731 extend-filesystems[1530]: Found sda May 17 00:49:13.378731 extend-filesystems[1530]: Found sda1 May 17 00:49:13.378731 extend-filesystems[1530]: Found sda2 May 17 00:49:13.378731 extend-filesystems[1530]: Found sda3 May 17 00:49:13.378731 extend-filesystems[1530]: Found usr May 17 00:49:13.378731 extend-filesystems[1530]: Found sda4 May 17 00:49:13.378731 extend-filesystems[1530]: Found sda6 May 17 00:49:13.378731 extend-filesystems[1530]: Found sda7 May 17 00:49:13.378731 extend-filesystems[1530]: Found sda9 May 17 00:49:13.378731 extend-filesystems[1530]: Checking size of /dev/sda9 May 17 00:49:13.443726 env[1557]: time="2025-05-17T00:49:13.435449840Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:49:13.384196 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:49:13.444092 jq[1564]: true May 17 00:49:13.384453 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:49:13.385586 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:49:13.385904 systemd[1]: Finished motdgen.service. May 17 00:49:13.474976 tar[1553]: linux-arm64/helm May 17 00:49:13.491488 env[1557]: time="2025-05-17T00:49:13.491438040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:49:13.493373 env[1557]: time="2025-05-17T00:49:13.493336720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:49:13.498450 env[1557]: time="2025-05-17T00:49:13.498380880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:49:13.498579 env[1557]: time="2025-05-17T00:49:13.498563400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:49:13.498941 env[1557]: time="2025-05-17T00:49:13.498918880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:49:13.499038 env[1557]: time="2025-05-17T00:49:13.499023720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:49:13.499103 env[1557]: time="2025-05-17T00:49:13.499089720Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:49:13.499157 env[1557]: time="2025-05-17T00:49:13.499144720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:49:13.499282 env[1557]: time="2025-05-17T00:49:13.499266240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:49:13.499564 env[1557]: time="2025-05-17T00:49:13.499546160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:49:13.501036 env[1557]: time="2025-05-17T00:49:13.501010880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:49:13.501137 env[1557]: time="2025-05-17T00:49:13.501122480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:49:13.501265 env[1557]: time="2025-05-17T00:49:13.501248680Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:49:13.501334 env[1557]: time="2025-05-17T00:49:13.501321240Z" level=info msg="metadata content store policy set" policy=shared May 17 00:49:13.523991 extend-filesystems[1530]: Old size kept for /dev/sda9 May 17 00:49:13.523991 extend-filesystems[1530]: Found sr0 May 17 00:49:13.518584 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.530754880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.530797960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.530812080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.530858640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.530875640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.530889520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.530901240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.531376640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.531398680Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.531412000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.531424840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.531439040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.531758040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:49:13.560581 env[1557]: time="2025-05-17T00:49:13.531831320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:49:13.518867 systemd[1]: Finished extend-filesystems.service. May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532159640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532199640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532214600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532265280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532278720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532291200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532304920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532373480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532388680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532400840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532412320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532426800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532541960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1
May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532557080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:49:13.566851 env[1557]: time="2025-05-17T00:49:13.532570360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:49:13.537963 systemd[1]: Started containerd.service.
May 17 00:49:13.568447 env[1557]: time="2025-05-17T00:49:13.532584360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:49:13.568447 env[1557]: time="2025-05-17T00:49:13.532599600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 17 00:49:13.568447 env[1557]: time="2025-05-17T00:49:13.532610240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:49:13.568447 env[1557]: time="2025-05-17T00:49:13.532639960Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 17 00:49:13.568447 env[1557]: time="2025-05-17T00:49:13.532675640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.532866680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.532920960Z" level=info msg="Connect containerd service"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.532951280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.533503440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.533600000Z" level=info msg="Start subscribing containerd event"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.533662480Z" level=info msg="Start recovering state"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.533776200Z" level=info msg="Start event monitor"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.533800600Z" level=info msg="Start snapshots syncer"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.533811360Z" level=info msg="Start cni network conf syncer for default"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.533820120Z" level=info msg="Start streaming server"
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.537725560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.537786160Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:49:13.568553 env[1557]: time="2025-05-17T00:49:13.538110360Z" level=info msg="containerd successfully booted in 0.103441s"
May 17 00:49:13.615415 bash[1592]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:49:13.616337 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 17 00:49:13.689803 systemd-logind[1545]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
May 17 00:49:13.690419 systemd-logind[1545]: New seat seat0.
May 17 00:49:13.710122 dbus-daemon[1528]: [system] SELinux support is enabled
May 17 00:49:13.710326 systemd[1]: Started dbus.service.
May 17 00:49:13.716050 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:49:13.716071 systemd[1]: Reached target system-config.target.
May 17 00:49:13.723759 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:49:13.723780 systemd[1]: Reached target user-config.target.
May 17 00:49:13.732184 systemd[1]: Started systemd-logind.service.
May 17 00:49:13.740124 systemd[1]: nvidia.service: Deactivated successfully.
May 17 00:49:14.127319 tar[1553]: linux-arm64/LICENSE
May 17 00:49:14.127510 tar[1553]: linux-arm64/README.md
May 17 00:49:14.134153 systemd[1]: Finished prepare-helm.service.
May 17 00:49:14.333220 systemd[1]: Started kubelet.service.
May 17 00:49:14.377930 update_engine[1548]: I0517 00:49:14.363384 1548 main.cc:92] Flatcar Update Engine starting
May 17 00:49:14.425316 systemd[1]: Started update-engine.service.
May 17 00:49:14.425656 update_engine[1548]: I0517 00:49:14.425364 1548 update_check_scheduler.cc:74] Next update check in 10m20s
May 17 00:49:14.433563 systemd[1]: Started locksmithd.service.
May 17 00:49:14.709019 kubelet[1657]: E0517 00:49:14.708966 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:49:14.710954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:49:14.711103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:49:16.250727 locksmithd[1664]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:49:16.730870 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:49:16.748897 systemd[1]: Finished sshd-keygen.service.
May 17 00:49:16.755590 systemd[1]: Starting issuegen.service...
May 17 00:49:16.760734 systemd[1]: Started waagent.service.
May 17 00:49:16.765422 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:49:16.765704 systemd[1]: Finished issuegen.service.
May 17 00:49:16.771440 systemd[1]: Starting systemd-user-sessions.service...
May 17 00:49:16.806210 systemd[1]: Finished systemd-user-sessions.service.
May 17 00:49:16.813255 systemd[1]: Started getty@tty1.service.
May 17 00:49:16.819468 systemd[1]: Started serial-getty@ttyAMA0.service.
May 17 00:49:16.824510 systemd[1]: Reached target getty.target.
May 17 00:49:16.831793 systemd[1]: Reached target multi-user.target.
May 17 00:49:16.837771 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 17 00:49:16.850723 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 17 00:49:16.850981 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 17 00:49:16.859914 systemd[1]: Startup finished in 17.459s (kernel) + 36.625s (userspace) = 54.085s.
May 17 00:49:17.682267 login[1686]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
May 17 00:49:17.683724 login[1687]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 00:49:17.723853 systemd[1]: Created slice user-500.slice.
May 17 00:49:17.724880 systemd[1]: Starting user-runtime-dir@500.service...
May 17 00:49:17.728454 systemd-logind[1545]: New session 2 of user core.
May 17 00:49:17.769592 systemd[1]: Finished user-runtime-dir@500.service.
May 17 00:49:17.771165 systemd[1]: Starting user@500.service...
May 17 00:49:17.804278 (systemd)[1693]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:49:18.049264 systemd[1693]: Queued start job for default target default.target.
May 17 00:49:18.049502 systemd[1693]: Reached target paths.target.
May 17 00:49:18.049517 systemd[1693]: Reached target sockets.target.
May 17 00:49:18.049528 systemd[1693]: Reached target timers.target.
May 17 00:49:18.049539 systemd[1693]: Reached target basic.target.
May 17 00:49:18.049685 systemd[1]: Started user@500.service.
May 17 00:49:18.050386 systemd[1693]: Reached target default.target.
May 17 00:49:18.050428 systemd[1693]: Startup finished in 240ms.
May 17 00:49:18.050534 systemd[1]: Started session-2.scope.
May 17 00:49:18.682597 login[1686]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 00:49:18.686999 systemd[1]: Started session-1.scope.
May 17 00:49:18.687422 systemd-logind[1545]: New session 1 of user core.
May 17 00:49:24.946291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:49:24.946465 systemd[1]: Stopped kubelet.service.
May 17 00:49:24.948005 systemd[1]: Starting kubelet.service...
May 17 00:49:25.372857 systemd[1]: Started kubelet.service.
May 17 00:49:25.421130 kubelet[1725]: E0517 00:49:25.421073 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:49:25.423777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:49:25.423922 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:49:27.313533 waagent[1681]: 2025-05-17T00:49:27.313424Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
May 17 00:49:27.320696 waagent[1681]: 2025-05-17T00:49:27.320591Z INFO Daemon Daemon OS: flatcar 3510.3.7
May 17 00:49:27.325412 waagent[1681]: 2025-05-17T00:49:27.325330Z INFO Daemon Daemon Python: 3.9.16
May 17 00:49:27.332854 waagent[1681]: 2025-05-17T00:49:27.332741Z INFO Daemon Daemon Run daemon
May 17 00:49:27.337294 waagent[1681]: 2025-05-17T00:49:27.337202Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7'
May 17 00:49:27.355762 waagent[1681]: 2025-05-17T00:49:27.355598Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
May 17 00:49:27.370822 waagent[1681]: 2025-05-17T00:49:27.370676Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 17 00:49:27.380421 waagent[1681]: 2025-05-17T00:49:27.380337Z INFO Daemon Daemon cloud-init is enabled: False
May 17 00:49:27.385423 waagent[1681]: 2025-05-17T00:49:27.385345Z INFO Daemon Daemon Using waagent for provisioning
May 17 00:49:27.391302 waagent[1681]: 2025-05-17T00:49:27.391227Z INFO Daemon Daemon Activate resource disk
May 17 00:49:27.396178 waagent[1681]: 2025-05-17T00:49:27.396101Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
May 17 00:49:27.410603 waagent[1681]: 2025-05-17T00:49:27.410517Z INFO Daemon Daemon Found device: None
May 17 00:49:27.415213 waagent[1681]: 2025-05-17T00:49:27.415134Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
May 17 00:49:27.423598 waagent[1681]: 2025-05-17T00:49:27.423519Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
May 17 00:49:27.435605 waagent[1681]: 2025-05-17T00:49:27.435530Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 17 00:49:27.441962 waagent[1681]: 2025-05-17T00:49:27.441884Z INFO Daemon Daemon Running default provisioning handler
May 17 00:49:27.455119 waagent[1681]: 2025-05-17T00:49:27.454959Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
May 17 00:49:27.470150 waagent[1681]: 2025-05-17T00:49:27.470006Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 17 00:49:27.480348 waagent[1681]: 2025-05-17T00:49:27.480263Z INFO Daemon Daemon cloud-init is enabled: False
May 17 00:49:27.485754 waagent[1681]: 2025-05-17T00:49:27.485673Z INFO Daemon Daemon Copying ovf-env.xml
May 17 00:49:27.694904 waagent[1681]: 2025-05-17T00:49:27.694371Z INFO Daemon Daemon Successfully mounted dvd
May 17 00:49:27.875680 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
May 17 00:49:27.910436 waagent[1681]: 2025-05-17T00:49:27.910273Z INFO Daemon Daemon Detect protocol endpoint
May 17 00:49:27.915403 waagent[1681]: 2025-05-17T00:49:27.915312Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 17 00:49:27.921092 waagent[1681]: 2025-05-17T00:49:27.921016Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 17 00:49:27.927909 waagent[1681]: 2025-05-17T00:49:27.927824Z INFO Daemon Daemon Test for route to 168.63.129.16
May 17 00:49:27.933419 waagent[1681]: 2025-05-17T00:49:27.933345Z INFO Daemon Daemon Route to 168.63.129.16 exists
May 17 00:49:27.938485 waagent[1681]: 2025-05-17T00:49:27.938410Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
May 17 00:49:28.165470 waagent[1681]: 2025-05-17T00:49:28.165399Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
May 17 00:49:28.172609 waagent[1681]: 2025-05-17T00:49:28.172558Z INFO Daemon Daemon Wire protocol version:2012-11-30
May 17 00:49:28.178016 waagent[1681]: 2025-05-17T00:49:28.177926Z INFO Daemon Daemon Server preferred version:2015-04-05
May 17 00:49:30.037431 waagent[1681]: 2025-05-17T00:49:30.037275Z INFO Daemon Daemon Initializing goal state during protocol detection
May 17 00:49:30.054043 waagent[1681]: 2025-05-17T00:49:30.053955Z INFO Daemon Daemon Forcing an update of the goal state..
May 17 00:49:30.059934 waagent[1681]: 2025-05-17T00:49:30.059844Z INFO Daemon Daemon Fetching goal state [incarnation 1]
May 17 00:49:30.241884 waagent[1681]: 2025-05-17T00:49:30.241745Z INFO Daemon Daemon Found private key matching thumbprint AD14D9A7513DAAB307AD5C820ECC572298BE50AA
May 17 00:49:30.250577 waagent[1681]: 2025-05-17T00:49:30.250476Z INFO Daemon Daemon Certificate with thumbprint 78E922C216A74392950B0FA66488A3FF5EBA9A29 has no matching private key.
May 17 00:49:30.260075 waagent[1681]: 2025-05-17T00:49:30.259981Z INFO Daemon Daemon Fetch goal state completed
May 17 00:49:30.311692 waagent[1681]: 2025-05-17T00:49:30.311557Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: ca5fd7be-36e8-4e15-9fa5-afc9f6e95c1c New eTag: 2013284011989103726]
May 17 00:49:30.322230 waagent[1681]: 2025-05-17T00:49:30.322139Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
May 17 00:49:30.341967 waagent[1681]: 2025-05-17T00:49:30.341905Z INFO Daemon Daemon Starting provisioning
May 17 00:49:30.347644 waagent[1681]: 2025-05-17T00:49:30.347532Z INFO Daemon Daemon Handle ovf-env.xml.
May 17 00:49:30.352829 waagent[1681]: 2025-05-17T00:49:30.352742Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-ce3994935d]
May 17 00:49:30.393516 waagent[1681]: 2025-05-17T00:49:30.393390Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-ce3994935d]
May 17 00:49:30.400151 waagent[1681]: 2025-05-17T00:49:30.400059Z INFO Daemon Daemon Examine /proc/net/route for primary interface
May 17 00:49:30.406700 waagent[1681]: 2025-05-17T00:49:30.406584Z INFO Daemon Daemon Primary interface is [eth0]
May 17 00:49:30.423087 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
May 17 00:49:30.423309 systemd[1]: Stopped systemd-networkd-wait-online.service.
May 17 00:49:30.423360 systemd[1]: Stopping systemd-networkd-wait-online.service...
May 17 00:49:30.423557 systemd[1]: Stopping systemd-networkd.service...
May 17 00:49:30.428679 systemd-networkd[1278]: eth0: DHCPv6 lease lost
May 17 00:49:30.428883 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
May 17 00:49:30.430477 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:49:30.430747 systemd[1]: Stopped systemd-networkd.service.
May 17 00:49:30.432862 systemd[1]: Starting systemd-networkd.service...
May 17 00:49:30.467487 systemd-networkd[1758]: enP58951s1: Link UP
May 17 00:49:30.467506 systemd-networkd[1758]: enP58951s1: Gained carrier
May 17 00:49:30.468440 systemd-networkd[1758]: eth0: Link UP
May 17 00:49:30.468453 systemd-networkd[1758]: eth0: Gained carrier
May 17 00:49:30.468913 systemd-networkd[1758]: lo: Link UP
May 17 00:49:30.468927 systemd-networkd[1758]: lo: Gained carrier
May 17 00:49:30.469164 systemd-networkd[1758]: eth0: Gained IPv6LL
May 17 00:49:30.469398 systemd-networkd[1758]: Enumeration completed
May 17 00:49:30.469887 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
May 17 00:49:30.470027 systemd-networkd[1758]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:49:30.470386 systemd[1]: Started systemd-networkd.service.
May 17 00:49:30.471116 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
May 17 00:49:30.472311 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 00:49:30.475784 waagent[1681]: 2025-05-17T00:49:30.475108Z INFO Daemon Daemon Create user account if not exists
May 17 00:49:30.478339 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
May 17 00:49:30.479433 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
May 17 00:49:30.481393 waagent[1681]: 2025-05-17T00:49:30.481301Z INFO Daemon Daemon User core already exists, skip useradd
May 17 00:49:30.487895 waagent[1681]: 2025-05-17T00:49:30.487804Z INFO Daemon Daemon Configure sudoer
May 17 00:49:30.492863 waagent[1681]: 2025-05-17T00:49:30.492776Z INFO Daemon Daemon Configure sshd
May 17 00:49:30.497318 waagent[1681]: 2025-05-17T00:49:30.497236Z INFO Daemon Daemon Deploy ssh public key.
May 17 00:49:30.502716 systemd-networkd[1758]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 17 00:49:30.503581 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
May 17 00:49:30.504745 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
May 17 00:49:30.513717 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
May 17 00:49:30.515372 systemd[1]: Finished systemd-networkd-wait-online.service.
May 17 00:49:31.788551 waagent[1681]: 2025-05-17T00:49:31.788466Z INFO Daemon Daemon Provisioning complete
May 17 00:49:31.809912 waagent[1681]: 2025-05-17T00:49:31.809835Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
May 17 00:49:31.816441 waagent[1681]: 2025-05-17T00:49:31.816365Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
May 17 00:49:31.827308 waagent[1681]: 2025-05-17T00:49:31.827231Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
May 17 00:49:32.128594 waagent[1768]: 2025-05-17T00:49:32.128452Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
May 17 00:49:32.129693 waagent[1768]: 2025-05-17T00:49:32.129621Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:49:32.129920 waagent[1768]: 2025-05-17T00:49:32.129875Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:49:32.142304 waagent[1768]: 2025-05-17T00:49:32.142233Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
May 17 00:49:32.142617 waagent[1768]: 2025-05-17T00:49:32.142571Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
May 17 00:49:32.211392 waagent[1768]: 2025-05-17T00:49:32.211269Z INFO ExtHandler ExtHandler Found private key matching thumbprint AD14D9A7513DAAB307AD5C820ECC572298BE50AA
May 17 00:49:32.211778 waagent[1768]: 2025-05-17T00:49:32.211727Z INFO ExtHandler ExtHandler Certificate with thumbprint 78E922C216A74392950B0FA66488A3FF5EBA9A29 has no matching private key.
May 17 00:49:32.212089 waagent[1768]: 2025-05-17T00:49:32.212041Z INFO ExtHandler ExtHandler Fetch goal state completed
May 17 00:49:32.226643 waagent[1768]: 2025-05-17T00:49:32.226577Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: b04feb4c-ccda-4ecd-aaa3-5008f7771860 New eTag: 2013284011989103726]
May 17 00:49:32.227339 waagent[1768]: 2025-05-17T00:49:32.227284Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
May 17 00:49:32.423529 waagent[1768]: 2025-05-17T00:49:32.423332Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 17 00:49:32.446686 waagent[1768]: 2025-05-17T00:49:32.446587Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1768
May 17 00:49:32.450607 waagent[1768]: 2025-05-17T00:49:32.450538Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
May 17 00:49:32.452121 waagent[1768]: 2025-05-17T00:49:32.452062Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
May 17 00:49:32.663397 waagent[1768]: 2025-05-17T00:49:32.663339Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 17 00:49:32.663980 waagent[1768]: 2025-05-17T00:49:32.663924Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 17 00:49:32.672074 waagent[1768]: 2025-05-17T00:49:32.672020Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 17 00:49:32.672797 waagent[1768]: 2025-05-17T00:49:32.672738Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
May 17 00:49:32.674170 waagent[1768]: 2025-05-17T00:49:32.674075Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
May 17 00:49:32.675642 waagent[1768]: 2025-05-17T00:49:32.675563Z INFO ExtHandler ExtHandler Starting env monitor service.
May 17 00:49:32.675949 waagent[1768]: 2025-05-17T00:49:32.675883Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:49:32.676461 waagent[1768]: 2025-05-17T00:49:32.676393Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:49:32.677086 waagent[1768]: 2025-05-17T00:49:32.677018Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 17 00:49:32.677412 waagent[1768]: 2025-05-17T00:49:32.677351Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 17 00:49:32.677412 waagent[1768]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 17 00:49:32.677412 waagent[1768]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
May 17 00:49:32.677412 waagent[1768]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 17 00:49:32.677412 waagent[1768]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 17 00:49:32.677412 waagent[1768]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:49:32.677412 waagent[1768]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:49:32.679723 waagent[1768]: 2025-05-17T00:49:32.679532Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 17 00:49:32.680067 waagent[1768]: 2025-05-17T00:49:32.679994Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:49:32.680861 waagent[1768]: 2025-05-17T00:49:32.680792Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:49:32.681462 waagent[1768]: 2025-05-17T00:49:32.681389Z INFO EnvHandler ExtHandler Configure routes
May 17 00:49:32.681613 waagent[1768]: 2025-05-17T00:49:32.681566Z INFO EnvHandler ExtHandler Gateway:None
May 17 00:49:32.681760 waagent[1768]: 2025-05-17T00:49:32.681716Z INFO EnvHandler ExtHandler Routes:None
May 17 00:49:32.682656 waagent[1768]: 2025-05-17T00:49:32.682569Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 17 00:49:32.682804 waagent[1768]: 2025-05-17T00:49:32.682739Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 17 00:49:32.683534 waagent[1768]: 2025-05-17T00:49:32.683449Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 17 00:49:32.683741 waagent[1768]: 2025-05-17T00:49:32.683672Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 17 00:49:32.684027 waagent[1768]: 2025-05-17T00:49:32.683964Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 17 00:49:32.694814 waagent[1768]: 2025-05-17T00:49:32.694744Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
May 17 00:49:32.696177 waagent[1768]: 2025-05-17T00:49:32.696122Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
May 17 00:49:32.697229 waagent[1768]: 2025-05-17T00:49:32.697174Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
May 17 00:49:32.767537 waagent[1768]: 2025-05-17T00:49:32.767477Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
May 17 00:49:32.886911 waagent[1768]: 2025-05-17T00:49:32.886833Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1758'
May 17 00:49:32.976562 waagent[1768]: 2025-05-17T00:49:32.976425Z INFO MonitorHandler ExtHandler Network interfaces:
May 17 00:49:32.976562 waagent[1768]: Executing ['ip', '-a', '-o', 'link']:
May 17 00:49:32.976562 waagent[1768]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 17 00:49:32.976562 waagent[1768]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:83:90 brd ff:ff:ff:ff:ff:ff
May 17 00:49:32.976562 waagent[1768]: 3: enP58951s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:83:90 brd ff:ff:ff:ff:ff:ff\ altname enP58951p0s2
May 17 00:49:32.976562 waagent[1768]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 17 00:49:32.976562 waagent[1768]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 17 00:49:32.976562 waagent[1768]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
May 17 00:49:32.976562 waagent[1768]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 17 00:49:32.976562 waagent[1768]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
May 17 00:49:32.976562 waagent[1768]: 2: eth0 inet6 fe80::222:48ff:feb9:8390/64 scope link \ valid_lft forever preferred_lft forever
May 17 00:49:33.154286 waagent[1768]: 2025-05-17T00:49:33.154227Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting
May 17 00:49:33.831813 waagent[1681]: 2025-05-17T00:49:33.831692Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
May 17 00:49:33.836284 waagent[1681]: 2025-05-17T00:49:33.836228Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent
May 17 00:49:35.203621 waagent[1800]: 2025-05-17T00:49:35.203528Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1)
May 17 00:49:35.207935 waagent[1800]: 2025-05-17T00:49:35.207868Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7
May 17 00:49:35.208192 waagent[1800]: 2025-05-17T00:49:35.208144Z INFO ExtHandler ExtHandler Python: 3.9.16
May 17 00:49:35.208396 waagent[1800]: 2025-05-17T00:49:35.208352Z INFO ExtHandler ExtHandler CPU Arch: aarch64
May 17 00:49:35.221864 waagent[1800]: 2025-05-17T00:49:35.221765Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 17 00:49:35.222387 waagent[1800]: 2025-05-17T00:49:35.222336Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:49:35.222618 waagent[1800]: 2025-05-17T00:49:35.222573Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:49:35.222943 waagent[1800]: 2025-05-17T00:49:35.222893Z INFO ExtHandler ExtHandler Initializing the goal state...
May 17 00:49:35.236058 waagent[1800]: 2025-05-17T00:49:35.235994Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 17 00:49:35.247989 waagent[1800]: 2025-05-17T00:49:35.247938Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
May 17 00:49:35.249121 waagent[1800]: 2025-05-17T00:49:35.249068Z INFO ExtHandler
May 17 00:49:35.249362 waagent[1800]: 2025-05-17T00:49:35.249314Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a16575e8-c0f5-43a3-b1c9-bb3f881c615e eTag: 2013284011989103726 source: Fabric]
May 17 00:49:35.250189 waagent[1800]: 2025-05-17T00:49:35.250136Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
May 17 00:49:35.251466 waagent[1800]: 2025-05-17T00:49:35.251410Z INFO ExtHandler
May 17 00:49:35.251709 waagent[1800]: 2025-05-17T00:49:35.251659Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
May 17 00:49:35.260032 waagent[1800]: 2025-05-17T00:49:35.259988Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
May 17 00:49:35.260563 waagent[1800]: 2025-05-17T00:49:35.260520Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
May 17 00:49:35.286493 waagent[1800]: 2025-05-17T00:49:35.286434Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
May 17 00:49:35.358719 waagent[1800]: 2025-05-17T00:49:35.358565Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AD14D9A7513DAAB307AD5C820ECC572298BE50AA', 'hasPrivateKey': True}
May 17 00:49:35.359975 waagent[1800]: 2025-05-17T00:49:35.359919Z INFO ExtHandler Downloaded certificate {'thumbprint': '78E922C216A74392950B0FA66488A3FF5EBA9A29', 'hasPrivateKey': False}
May 17 00:49:35.361153 waagent[1800]: 2025-05-17T00:49:35.361096Z INFO ExtHandler Fetch goal state from WireServer completed
May 17 00:49:35.362136 waagent[1800]: 2025-05-17T00:49:35.362082Z INFO ExtHandler ExtHandler Goal state initialization completed.
May 17 00:49:35.383487 waagent[1800]: 2025-05-17T00:49:35.383383Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
May 17 00:49:35.391602 waagent[1800]: 2025-05-17T00:49:35.391504Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
May 17 00:49:35.395194 waagent[1800]: 2025-05-17T00:49:35.395099Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT']
May 17 00:49:35.395494 waagent[1800]: 2025-05-17T00:49:35.395446Z INFO ExtHandler ExtHandler Checking state of the firewall
May 17 00:49:35.446298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:49:35.446481 systemd[1]: Stopped kubelet.service.
May 17 00:49:35.447891 systemd[1]: Starting kubelet.service...
May 17 00:49:36.005260 waagent[1800]: 2025-05-17T00:49:36.005130Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric:
May 17 00:49:36.005260 waagent[1800]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 17 00:49:36.005260 waagent[1800]: pkts bytes target prot opt in out source destination
May 17 00:49:36.005260 waagent[1800]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 17 00:49:36.005260 waagent[1800]: pkts bytes target prot opt in out source destination
May 17 00:49:36.005260 waagent[1800]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 17 00:49:36.005260 waagent[1800]: pkts bytes target prot opt in out source destination
May 17 00:49:36.005260 waagent[1800]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 17 00:49:36.005260 waagent[1800]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 17 00:49:36.005260 waagent[1800]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 17 00:49:36.006335 waagent[1800]: 2025-05-17T00:49:36.006276Z INFO ExtHandler ExtHandler Setting up persistent firewall rules
May 17 00:49:36.009204 waagent[1800]: 2025-05-17T00:49:36.009085Z INFO ExtHandler ExtHandler The firewalld service is not present on the system
May 17 00:49:36.009451 waagent[1800]: 2025-05-17T00:49:36.009401Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 17 00:49:36.021854 waagent[1800]: 2025-05-17T00:49:36.021785Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 17 00:49:36.030280 systemd[1]: Started kubelet.service.
May 17 00:49:36.033791 waagent[1800]: 2025-05-17T00:49:36.033491Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 17 00:49:36.034686 waagent[1800]: 2025-05-17T00:49:36.034077Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
May 17 00:49:36.045617 waagent[1800]: 2025-05-17T00:49:36.045517Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1800
May 17 00:49:36.049019 waagent[1800]: 2025-05-17T00:49:36.048931Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
May 17 00:49:36.049918 waagent[1800]: 2025-05-17T00:49:36.049853Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled
May 17 00:49:36.050860 waagent[1800]: 2025-05-17T00:49:36.050792Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
May 17 00:49:36.053714 waagent[1800]: 2025-05-17T00:49:36.053650Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 17 00:49:36.055029 waagent[1800]: 2025-05-17T00:49:36.054959Z INFO ExtHandler ExtHandler Starting env monitor service.
May 17 00:49:36.055421 waagent[1800]: 2025-05-17T00:49:36.055362Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:49:36.055560 waagent[1800]: 2025-05-17T00:49:36.055513Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:49:36.056153 waagent[1800]: 2025-05-17T00:49:36.056055Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 17 00:49:36.056519 waagent[1800]: 2025-05-17T00:49:36.056457Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 17 00:49:36.057370 waagent[1800]: 2025-05-17T00:49:36.057310Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 17 00:49:36.057370 waagent[1800]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 17 00:49:36.057370 waagent[1800]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
May 17 00:49:36.057370 waagent[1800]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 17 00:49:36.057370 waagent[1800]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 17 00:49:36.057370 waagent[1800]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:49:36.057370 waagent[1800]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:49:36.059883 waagent[1800]: 2025-05-17T00:49:36.059806Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 17 00:49:36.060434 waagent[1800]: 2025-05-17T00:49:36.059535Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:49:36.064565 waagent[1800]: 2025-05-17T00:49:36.064401Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:49:36.065083 waagent[1800]: 2025-05-17T00:49:36.064986Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:49:36.065408 waagent[1800]: 2025-05-17T00:49:36.065338Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 17 00:49:36.065556 waagent[1800]: 2025-05-17T00:49:36.065490Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:49:36.067483 waagent[1800]: 2025-05-17T00:49:36.067400Z INFO EnvHandler ExtHandler Configure routes May 17 00:49:36.068965 waagent[1800]: 2025-05-17T00:49:36.068893Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:49:36.074900 waagent[1800]: 2025-05-17T00:49:36.074796Z INFO EnvHandler ExtHandler Gateway:None May 17 00:49:36.075062 waagent[1800]: 2025-05-17T00:49:36.075012Z INFO EnvHandler ExtHandler Routes:None May 17 00:49:36.086082 waagent[1800]: 2025-05-17T00:49:36.086000Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:49:36.086082 waagent[1800]: Executing ['ip', '-a', '-o', 'link']: May 17 00:49:36.086082 waagent[1800]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:49:36.086082 waagent[1800]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:83:90 brd ff:ff:ff:ff:ff:ff May 17 00:49:36.086082 waagent[1800]: 3: enP58951s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b9:83:90 brd ff:ff:ff:ff:ff:ff\ altname enP58951p0s2 May 
17 00:49:36.086082 waagent[1800]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:49:36.086082 waagent[1800]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 17 00:49:36.086082 waagent[1800]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:49:36.086082 waagent[1800]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:49:36.086082 waagent[1800]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:49:36.086082 waagent[1800]: 2: eth0 inet6 fe80::222:48ff:feb9:8390/64 scope link \ valid_lft forever preferred_lft forever May 17 00:49:36.090290 waagent[1800]: 2025-05-17T00:49:36.090210Z INFO ExtHandler ExtHandler Downloading agent manifest May 17 00:49:36.094850 kubelet[1838]: E0517 00:49:36.094812 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:49:36.096453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:49:36.096587 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:49:36.128024 waagent[1800]: 2025-05-17T00:49:36.127958Z INFO ExtHandler ExtHandler May 17 00:49:36.128321 waagent[1800]: 2025-05-17T00:49:36.128272Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 575e197a-c765-4c25-a6a3-9094d0eac5ff correlation e0faa9ac-7682-41f2-82fa-9faa0abcaa03 created: 2025-05-17T00:47:34.233727Z] May 17 00:49:36.129547 waagent[1800]: 2025-05-17T00:49:36.129489Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
May 17 00:49:36.131581 waagent[1800]: 2025-05-17T00:49:36.131529Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] May 17 00:49:36.154751 waagent[1800]: 2025-05-17T00:49:36.154687Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 17 00:49:36.157267 waagent[1800]: 2025-05-17T00:49:36.157166Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 53EACC99-07A5-4659-BB6F-20AA083BC588;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 17 00:49:36.219480 waagent[1800]: 2025-05-17T00:49:36.219380Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:49:36.235773 waagent[1800]: 2025-05-17T00:49:36.235703Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:49:46.196327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:49:46.196499 systemd[1]: Stopped kubelet.service. May 17 00:49:46.197905 systemd[1]: Starting kubelet.service... May 17 00:49:46.284692 systemd[1]: Started kubelet.service. May 17 00:49:46.397826 kubelet[1866]: E0517 00:49:46.397777 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:49:46.399616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:49:46.399766 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:49:49.517640 kernel: hv_balloon: Max. dynamic memory size: 4096 MB May 17 00:49:51.417721 systemd[1]: Created slice system-sshd.slice. May 17 00:49:51.418974 systemd[1]: Started sshd@0-10.200.20.21:22-10.200.16.10:33688.service. 
May 17 00:49:52.073905 sshd[1873]: Accepted publickey for core from 10.200.16.10 port 33688 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:52.093142 sshd[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:52.097607 systemd[1]: Started session-3.scope. May 17 00:49:52.097967 systemd-logind[1545]: New session 3 of user core. May 17 00:49:52.489485 systemd[1]: Started sshd@1-10.200.20.21:22-10.200.16.10:33700.service. May 17 00:49:52.941445 sshd[1878]: Accepted publickey for core from 10.200.16.10 port 33700 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:52.942587 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:52.947076 systemd[1]: Started session-4.scope. May 17 00:49:52.948060 systemd-logind[1545]: New session 4 of user core. May 17 00:49:53.286789 sshd[1878]: pam_unix(sshd:session): session closed for user core May 17 00:49:53.289419 systemd[1]: sshd@1-10.200.20.21:22-10.200.16.10:33700.service: Deactivated successfully. May 17 00:49:53.290355 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. May 17 00:49:53.290412 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:49:53.291615 systemd-logind[1545]: Removed session 4. May 17 00:49:53.364972 systemd[1]: Started sshd@2-10.200.20.21:22-10.200.16.10:33716.service. May 17 00:49:53.844669 sshd[1885]: Accepted publickey for core from 10.200.16.10 port 33716 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:53.846250 sshd[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:53.850028 systemd-logind[1545]: New session 5 of user core. May 17 00:49:53.850440 systemd[1]: Started session-5.scope. 
May 17 00:49:54.204350 sshd[1885]: pam_unix(sshd:session): session closed for user core May 17 00:49:54.207254 systemd[1]: sshd@2-10.200.20.21:22-10.200.16.10:33716.service: Deactivated successfully. May 17 00:49:54.207997 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:49:54.211350 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. May 17 00:49:54.212181 systemd-logind[1545]: Removed session 5. May 17 00:49:54.276656 systemd[1]: Started sshd@3-10.200.20.21:22-10.200.16.10:33728.service. May 17 00:49:54.721041 sshd[1892]: Accepted publickey for core from 10.200.16.10 port 33728 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:54.722780 sshd[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:54.727508 systemd[1]: Started session-6.scope. May 17 00:49:54.728730 systemd-logind[1545]: New session 6 of user core. May 17 00:49:55.045971 sshd[1892]: pam_unix(sshd:session): session closed for user core May 17 00:49:55.048337 systemd[1]: sshd@3-10.200.20.21:22-10.200.16.10:33728.service: Deactivated successfully. May 17 00:49:55.049068 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:49:55.050139 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. May 17 00:49:55.050949 systemd-logind[1545]: Removed session 6. May 17 00:49:55.143364 systemd[1]: Started sshd@4-10.200.20.21:22-10.200.16.10:33736.service. May 17 00:49:55.628410 sshd[1899]: Accepted publickey for core from 10.200.16.10 port 33736 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:49:55.629981 sshd[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:49:55.633811 systemd-logind[1545]: New session 7 of user core. May 17 00:49:55.634223 systemd[1]: Started session-7.scope. 
May 17 00:49:56.217956 sudo[1903]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:49:56.218505 sudo[1903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:49:56.240161 systemd[1]: Starting docker.service... May 17 00:49:56.277465 env[1913]: time="2025-05-17T00:49:56.277415000Z" level=info msg="Starting up" May 17 00:49:56.286913 env[1913]: time="2025-05-17T00:49:56.286865520Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:49:56.286913 env[1913]: time="2025-05-17T00:49:56.286904960Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:49:56.287062 env[1913]: time="2025-05-17T00:49:56.286930240Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:49:56.287062 env[1913]: time="2025-05-17T00:49:56.286942000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:49:56.288838 env[1913]: time="2025-05-17T00:49:56.288815200Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:49:56.288961 env[1913]: time="2025-05-17T00:49:56.288944320Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:49:56.289037 env[1913]: time="2025-05-17T00:49:56.289020120Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:49:56.289100 env[1913]: time="2025-05-17T00:49:56.289085480Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:49:56.388958 env[1913]: time="2025-05-17T00:49:56.388920160Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 17 00:49:56.389194 env[1913]: time="2025-05-17T00:49:56.389179760Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 17 00:49:56.389578 env[1913]: 
time="2025-05-17T00:49:56.389399240Z" level=info msg="Loading containers: start." May 17 00:49:56.446326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:49:56.446498 systemd[1]: Stopped kubelet.service. May 17 00:49:56.448050 systemd[1]: Starting kubelet.service... May 17 00:49:56.578981 systemd[1]: Started kubelet.service. May 17 00:49:56.647704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:49:56.647842 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:49:57.009086 kubelet[1958]: E0517 00:49:56.646054 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/ May 17 00:49:57.009086 kubelet[1958]: lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:49:57.093650 kernel: Initializing XFRM netlink socket May 17 00:49:57.115992 env[1913]: time="2025-05-17T00:49:57.115938080Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 00:49:57.118232 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. May 17 00:49:57.286051 systemd-networkd[1758]: docker0: Link UP May 17 00:49:57.308832 env[1913]: time="2025-05-17T00:49:57.308792680Z" level=info msg="Loading containers: done." May 17 00:49:57.319999 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck764796241-merged.mount: Deactivated successfully. May 17 00:49:57.346481 systemd-timesyncd[1497]: Contacted time server 23.157.160.168:123 (0.flatcar.pool.ntp.org). May 17 00:49:57.346561 systemd-timesyncd[1497]: Initial clock synchronization to Sat 2025-05-17 00:49:57.346897 UTC. 
May 17 00:49:57.349437 env[1913]: time="2025-05-17T00:49:57.349400320Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:49:57.349815 env[1913]: time="2025-05-17T00:49:57.349790720Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:49:57.350018 env[1913]: time="2025-05-17T00:49:57.350004320Z" level=info msg="Daemon has completed initialization" May 17 00:49:57.376656 systemd[1]: Started docker.service. May 17 00:49:57.385337 env[1913]: time="2025-05-17T00:49:57.385150480Z" level=info msg="API listen on /run/docker.sock" May 17 00:49:59.517584 env[1557]: time="2025-05-17T00:49:59.517539846Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:49:59.780806 update_engine[1548]: I0517 00:49:59.778764 1548 update_attempter.cc:509] Updating boot flags... May 17 00:50:00.379685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745768287.mount: Deactivated successfully. 
May 17 00:50:01.793167 env[1557]: time="2025-05-17T00:50:01.793120778Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:01.801115 env[1557]: time="2025-05-17T00:50:01.801074576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:01.804780 env[1557]: time="2025-05-17T00:50:01.804744406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:01.808692 env[1557]: time="2025-05-17T00:50:01.808656923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:01.809340 env[1557]: time="2025-05-17T00:50:01.809310903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 00:50:01.810585 env[1557]: time="2025-05-17T00:50:01.810558780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:50:03.254048 env[1557]: time="2025-05-17T00:50:03.254000581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:03.260392 env[1557]: time="2025-05-17T00:50:03.260354108Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:50:03.266198 env[1557]: time="2025-05-17T00:50:03.266145141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:03.270002 env[1557]: time="2025-05-17T00:50:03.269965761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:03.270666 env[1557]: time="2025-05-17T00:50:03.270621538Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 00:50:03.271733 env[1557]: time="2025-05-17T00:50:03.271700527Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:50:04.490728 env[1557]: time="2025-05-17T00:50:04.490672178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.496806 env[1557]: time="2025-05-17T00:50:04.496750447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.500874 env[1557]: time="2025-05-17T00:50:04.500834508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.505509 env[1557]: time="2025-05-17T00:50:04.505447302Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:04.506274 env[1557]: time="2025-05-17T00:50:04.506248242Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 00:50:04.506812 env[1557]: time="2025-05-17T00:50:04.506783255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:50:05.687886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706041308.mount: Deactivated successfully. May 17 00:50:06.193726 env[1557]: time="2025-05-17T00:50:06.193680930Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:06.199228 env[1557]: time="2025-05-17T00:50:06.199188249Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:06.203723 env[1557]: time="2025-05-17T00:50:06.203685386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:06.207001 env[1557]: time="2025-05-17T00:50:06.206965337Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:06.208068 env[1557]: time="2025-05-17T00:50:06.208038361Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference 
\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 00:50:06.208802 env[1557]: time="2025-05-17T00:50:06.208773457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:50:06.696298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:50:06.696463 systemd[1]: Stopped kubelet.service. May 17 00:50:06.698016 systemd[1]: Starting kubelet.service... May 17 00:50:07.236299 systemd[1]: Started kubelet.service. May 17 00:50:07.279066 kubelet[2089]: E0517 00:50:07.279012 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:50:07.280855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:50:07.280999 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:50:07.321473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858467768.mount: Deactivated successfully. 
May 17 00:50:10.512416 env[1557]: time="2025-05-17T00:50:10.512360387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:10.520461 env[1557]: time="2025-05-17T00:50:10.520414042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:10.524459 env[1557]: time="2025-05-17T00:50:10.524422749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:10.528223 env[1557]: time="2025-05-17T00:50:10.528189172Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:10.528968 env[1557]: time="2025-05-17T00:50:10.528940264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:50:10.529941 env[1557]: time="2025-05-17T00:50:10.529907761Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:50:11.138963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460211999.mount: Deactivated successfully. 
May 17 00:50:11.167430 env[1557]: time="2025-05-17T00:50:11.167389777Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:11.174831 env[1557]: time="2025-05-17T00:50:11.174779813Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:11.177970 env[1557]: time="2025-05-17T00:50:11.177924702Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:11.181668 env[1557]: time="2025-05-17T00:50:11.181619000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:11.182235 env[1557]: time="2025-05-17T00:50:11.182203649Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:50:11.182756 env[1557]: time="2025-05-17T00:50:11.182727657Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:50:11.742103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534648440.mount: Deactivated successfully. 
May 17 00:50:13.925738 env[1557]: time="2025-05-17T00:50:13.925692963Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:13.933608 env[1557]: time="2025-05-17T00:50:13.933565112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:13.937808 env[1557]: time="2025-05-17T00:50:13.937773370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:13.943380 env[1557]: time="2025-05-17T00:50:13.943347327Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:13.944305 env[1557]: time="2025-05-17T00:50:13.944275060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 00:50:17.446325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 17 00:50:17.446500 systemd[1]: Stopped kubelet.service. May 17 00:50:17.447956 systemd[1]: Starting kubelet.service... May 17 00:50:17.540737 systemd[1]: Started kubelet.service. 
May 17 00:50:17.602221 kubelet[2120]: E0517 00:50:17.602178 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:50:17.604154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:50:17.604290 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:50:18.957827 systemd[1]: Stopped kubelet.service. May 17 00:50:18.960145 systemd[1]: Starting kubelet.service... May 17 00:50:18.987921 systemd[1]: Reloading. May 17 00:50:19.076531 /usr/lib/systemd/system-generators/torcx-generator[2156]: time="2025-05-17T00:50:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:50:19.076875 /usr/lib/systemd/system-generators/torcx-generator[2156]: time="2025-05-17T00:50:19Z" level=info msg="torcx already run" May 17 00:50:19.163918 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:50:19.163938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:50:19.180920 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:50:19.274142 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:50:19.274398 systemd[1]: Stopped kubelet.service. 
May 17 00:50:19.276573 systemd[1]: Starting kubelet.service... May 17 00:50:19.500240 systemd[1]: Started kubelet.service. May 17 00:50:19.538139 kubelet[2235]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:50:19.538139 kubelet[2235]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:50:19.538139 kubelet[2235]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:50:19.538139 kubelet[2235]: I0517 00:50:19.537888 2235 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:50:20.681953 kubelet[2235]: I0517 00:50:20.681919 2235 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:50:20.682314 kubelet[2235]: I0517 00:50:20.682302 2235 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:50:20.682991 kubelet[2235]: I0517 00:50:20.682971 2235 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:50:20.705585 kubelet[2235]: E0517 00:50:20.705538 2235 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:20.706795 kubelet[2235]: I0517 00:50:20.706774 2235 
dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:50:20.712277 kubelet[2235]: E0517 00:50:20.712210 2235 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:50:20.712410 kubelet[2235]: I0517 00:50:20.712396 2235 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:50:20.716268 kubelet[2235]: I0517 00:50:20.716246 2235 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:50:20.716661 kubelet[2235]: I0517 00:50:20.716645 2235 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:50:20.716893 kubelet[2235]: I0517 00:50:20.716864 2235 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:50:20.717168 kubelet[2235]: I0517 00:50:20.716964 2235 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.7-n-ce3994935d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:50:20.717306 kubelet[2235]: I0517 00:50:20.717294 2235 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:50:20.717365 kubelet[2235]: I0517 00:50:20.717357 2235 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:50:20.717528 kubelet[2235]: I0517 00:50:20.717516 2235 state_mem.go:36] "Initialized new in-memory state store" May 17 00:50:20.722556 kubelet[2235]: I0517 00:50:20.722534 2235 
kubelet.go:408] "Attempting to sync node with API server" May 17 00:50:20.722709 kubelet[2235]: I0517 00:50:20.722695 2235 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:50:20.722795 kubelet[2235]: W0517 00:50:20.722670 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ce3994935d&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused May 17 00:50:20.722836 kubelet[2235]: E0517 00:50:20.722809 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ce3994935d&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:20.722878 kubelet[2235]: I0517 00:50:20.722867 2235 kubelet.go:314] "Adding apiserver pod source" May 17 00:50:20.722934 kubelet[2235]: I0517 00:50:20.722924 2235 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:50:20.727549 kubelet[2235]: I0517 00:50:20.727530 2235 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:50:20.728149 kubelet[2235]: I0517 00:50:20.728133 2235 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:50:20.728286 kubelet[2235]: W0517 00:50:20.728275 2235 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:50:20.728879 kubelet[2235]: I0517 00:50:20.728862 2235 server.go:1274] "Started kubelet" May 17 00:50:20.729104 kubelet[2235]: W0517 00:50:20.729071 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused May 17 00:50:20.729211 kubelet[2235]: E0517 00:50:20.729192 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:20.730753 kubelet[2235]: I0517 00:50:20.730591 2235 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:50:20.731595 kubelet[2235]: I0517 00:50:20.731532 2235 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:50:20.731881 kubelet[2235]: I0517 00:50:20.731854 2235 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:50:20.739452 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:50:20.739543 kubelet[2235]: I0517 00:50:20.732764 2235 server.go:449] "Adding debug handlers to kubelet server" May 17 00:50:20.739844 kubelet[2235]: I0517 00:50:20.739802 2235 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:50:20.743398 kubelet[2235]: I0517 00:50:20.743365 2235 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:50:20.744502 kubelet[2235]: I0517 00:50:20.744482 2235 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:50:20.744733 kubelet[2235]: E0517 00:50:20.744708 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-ce3994935d\" not found" May 17 00:50:20.745544 kubelet[2235]: I0517 00:50:20.745520 2235 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:50:20.745604 kubelet[2235]: I0517 00:50:20.745598 2235 reconciler.go:26] "Reconciler: start to sync state" May 17 00:50:20.748043 kubelet[2235]: E0517 00:50:20.746974 2235 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-ce3994935d.18402a2b43c73796 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-ce3994935d,UID:ci-3510.3.7-n-ce3994935d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-ce3994935d,},FirstTimestamp:2025-05-17 00:50:20.728842134 +0000 UTC m=+1.224087957,LastTimestamp:2025-05-17 00:50:20.728842134 +0000 UTC m=+1.224087957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-ce3994935d,}" 
May 17 00:50:20.748392 kubelet[2235]: W0517 00:50:20.748346 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused May 17 00:50:20.748457 kubelet[2235]: E0517 00:50:20.748407 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:20.748485 kubelet[2235]: E0517 00:50:20.748470 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ce3994935d?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="200ms" May 17 00:50:20.748765 kubelet[2235]: I0517 00:50:20.748739 2235 factory.go:221] Registration of the systemd container factory successfully May 17 00:50:20.748845 kubelet[2235]: I0517 00:50:20.748823 2235 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:50:20.750813 kubelet[2235]: I0517 00:50:20.750786 2235 factory.go:221] Registration of the containerd container factory successfully May 17 00:50:20.823612 kubelet[2235]: I0517 00:50:20.823564 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:50:20.824839 kubelet[2235]: I0517 00:50:20.824821 2235 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:50:20.825009 kubelet[2235]: I0517 00:50:20.824990 2235 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:50:20.825113 kubelet[2235]: I0517 00:50:20.825102 2235 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:50:20.825233 kubelet[2235]: E0517 00:50:20.825215 2235 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:50:20.826517 kubelet[2235]: W0517 00:50:20.826488 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused May 17 00:50:20.826707 kubelet[2235]: E0517 00:50:20.826688 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:20.845222 kubelet[2235]: E0517 00:50:20.845203 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-ce3994935d\" not found" May 17 00:50:20.925659 kubelet[2235]: E0517 00:50:20.925617 2235 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:50:20.936616 kubelet[2235]: I0517 00:50:20.935360 2235 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:50:20.936713 kubelet[2235]: I0517 00:50:20.936662 2235 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:50:20.936713 kubelet[2235]: I0517 00:50:20.936692 2235 state_mem.go:36] "Initialized new in-memory state store" May 17 00:50:20.941780 kubelet[2235]: I0517 00:50:20.941757 2235 
policy_none.go:49] "None policy: Start" May 17 00:50:20.942510 kubelet[2235]: I0517 00:50:20.942490 2235 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:50:20.942567 kubelet[2235]: I0517 00:50:20.942518 2235 state_mem.go:35] "Initializing new in-memory state store" May 17 00:50:20.946719 kubelet[2235]: E0517 00:50:20.946694 2235 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-ce3994935d\" not found" May 17 00:50:20.950805 kubelet[2235]: I0517 00:50:20.950780 2235 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:50:20.950944 kubelet[2235]: I0517 00:50:20.950926 2235 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:50:20.950976 kubelet[2235]: I0517 00:50:20.950944 2235 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:50:20.952336 kubelet[2235]: I0517 00:50:20.952314 2235 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:50:20.952801 kubelet[2235]: E0517 00:50:20.952770 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ce3994935d?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="400ms" May 17 00:50:20.955721 kubelet[2235]: E0517 00:50:20.955696 2235 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-ce3994935d\" not found" May 17 00:50:21.052503 kubelet[2235]: I0517 00:50:21.052474 2235 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:21.052832 kubelet[2235]: E0517 00:50:21.052807 2235 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 
10.200.20.21:6443: connect: connection refused" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147160 kubelet[2235]: I0517 00:50:21.147122 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/31d358a36d2d53cc881b882b25bff001-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-ce3994935d\" (UID: \"31d358a36d2d53cc881b882b25bff001\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147345 kubelet[2235]: I0517 00:50:21.147329 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147426 kubelet[2235]: I0517 00:50:21.147413 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147509 kubelet[2235]: I0517 00:50:21.147497 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147857 kubelet[2235]: I0517 00:50:21.147825 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6847fbb63a9714fdb0d72514d6de7d90-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-ce3994935d\" (UID: \"6847fbb63a9714fdb0d72514d6de7d90\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147918 kubelet[2235]: I0517 00:50:21.147872 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/31d358a36d2d53cc881b882b25bff001-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-ce3994935d\" (UID: \"31d358a36d2d53cc881b882b25bff001\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147918 kubelet[2235]: I0517 00:50:21.147890 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/31d358a36d2d53cc881b882b25bff001-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-ce3994935d\" (UID: \"31d358a36d2d53cc881b882b25bff001\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147918 kubelet[2235]: I0517 00:50:21.147911 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:21.147987 kubelet[2235]: I0517 00:50:21.147935 2235 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 
00:50:21.254560 kubelet[2235]: I0517 00:50:21.254529 2235 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:21.254928 kubelet[2235]: E0517 00:50:21.254903 2235 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:21.353746 kubelet[2235]: E0517 00:50:21.353714 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ce3994935d?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="800ms" May 17 00:50:21.433653 env[1557]: time="2025-05-17T00:50:21.433483729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-ce3994935d,Uid:31d358a36d2d53cc881b882b25bff001,Namespace:kube-system,Attempt:0,}" May 17 00:50:21.435649 env[1557]: time="2025-05-17T00:50:21.435492466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-ce3994935d,Uid:585eba96ef8976ec6422800bdb906114,Namespace:kube-system,Attempt:0,}" May 17 00:50:21.436201 env[1557]: time="2025-05-17T00:50:21.436156991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-ce3994935d,Uid:6847fbb63a9714fdb0d72514d6de7d90,Namespace:kube-system,Attempt:0,}" May 17 00:50:21.632449 kubelet[2235]: W0517 00:50:21.632318 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ce3994935d&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused May 17 00:50:21.632449 kubelet[2235]: E0517 00:50:21.632387 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-ce3994935d&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:21.656517 kubelet[2235]: I0517 00:50:21.656484 2235 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:21.656814 kubelet[2235]: E0517 00:50:21.656792 2235 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:21.803558 kubelet[2235]: W0517 00:50:21.803498 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused May 17 00:50:21.803913 kubelet[2235]: E0517 00:50:21.803562 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:22.057730 kubelet[2235]: W0517 00:50:22.057613 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused May 17 00:50:22.057730 kubelet[2235]: E0517 00:50:22.057699 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.200.20.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:22.094320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004570476.mount: Deactivated successfully. May 17 00:50:22.141699 env[1557]: time="2025-05-17T00:50:22.141654536Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.154549 kubelet[2235]: E0517 00:50:22.154510 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-ce3994935d?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="1.6s" May 17 00:50:22.165534 env[1557]: time="2025-05-17T00:50:22.165495240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.175614 env[1557]: time="2025-05-17T00:50:22.175580598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.182175 env[1557]: time="2025-05-17T00:50:22.182142128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.189256 env[1557]: time="2025-05-17T00:50:22.189227623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.194336 env[1557]: time="2025-05-17T00:50:22.194302942Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.202058 env[1557]: time="2025-05-17T00:50:22.202019562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.208012 env[1557]: time="2025-05-17T00:50:22.207979328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.217368 env[1557]: time="2025-05-17T00:50:22.217331040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.228524 env[1557]: time="2025-05-17T00:50:22.228491966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.238028 env[1557]: time="2025-05-17T00:50:22.237987279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.255900 env[1557]: time="2025-05-17T00:50:22.255867737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:50:22.261637 env[1557]: time="2025-05-17T00:50:22.261554941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:22.261637 env[1557]: time="2025-05-17T00:50:22.261594301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:22.262506 env[1557]: time="2025-05-17T00:50:22.261604381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:22.262506 env[1557]: time="2025-05-17T00:50:22.261843103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6893ac0968fbad3e5239b720c4395165a7d9076a39e7674b91a3149db5193ce pid=2275 runtime=io.containerd.runc.v2 May 17 00:50:22.303457 env[1557]: time="2025-05-17T00:50:22.303413944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-ce3994935d,Uid:31d358a36d2d53cc881b882b25bff001,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6893ac0968fbad3e5239b720c4395165a7d9076a39e7674b91a3149db5193ce\"" May 17 00:50:22.306219 env[1557]: time="2025-05-17T00:50:22.306192326Z" level=info msg="CreateContainer within sandbox \"a6893ac0968fbad3e5239b720c4395165a7d9076a39e7674b91a3149db5193ce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:50:22.321022 env[1557]: time="2025-05-17T00:50:22.316542885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:22.321022 env[1557]: time="2025-05-17T00:50:22.316579886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:22.321022 env[1557]: time="2025-05-17T00:50:22.316590286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:22.321022 env[1557]: time="2025-05-17T00:50:22.316768727Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2470a01744a17e5cd7237634d8bdaed19dbfddee9b548c575a83f87328f6c4ed pid=2314 runtime=io.containerd.runc.v2 May 17 00:50:22.322238 env[1557]: time="2025-05-17T00:50:22.322072008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:50:22.322238 env[1557]: time="2025-05-17T00:50:22.322105768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:22.322238 env[1557]: time="2025-05-17T00:50:22.322116288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:22.323282 env[1557]: time="2025-05-17T00:50:22.323180617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd7715e7d3eb883611689e96abe6e0e991b48da2f461a16a17e2be89aa34ae60 pid=2334 runtime=io.containerd.runc.v2 May 17 00:50:22.338261 kubelet[2235]: W0517 00:50:22.338167 2235 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused May 17 00:50:22.338261 kubelet[2235]: E0517 00:50:22.338233 2235 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.21:6443: connect: connection refused" logger="UnhandledError" May 17 00:50:22.376785 env[1557]: 
time="2025-05-17T00:50:22.374466412Z" level=info msg="CreateContainer within sandbox \"a6893ac0968fbad3e5239b720c4395165a7d9076a39e7674b91a3149db5193ce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23e3f6997b43942ee5b58dfb65a7820cb9f8a3e58844e6612c64a413f9aaae5e\"" May 17 00:50:22.377152 env[1557]: time="2025-05-17T00:50:22.377127393Z" level=info msg="StartContainer for \"23e3f6997b43942ee5b58dfb65a7820cb9f8a3e58844e6612c64a413f9aaae5e\"" May 17 00:50:22.383301 env[1557]: time="2025-05-17T00:50:22.383250920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-ce3994935d,Uid:6847fbb63a9714fdb0d72514d6de7d90,Namespace:kube-system,Attempt:0,} returns sandbox id \"2470a01744a17e5cd7237634d8bdaed19dbfddee9b548c575a83f87328f6c4ed\"" May 17 00:50:22.385394 env[1557]: time="2025-05-17T00:50:22.385354136Z" level=info msg="CreateContainer within sandbox \"2470a01744a17e5cd7237634d8bdaed19dbfddee9b548c575a83f87328f6c4ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:50:22.411475 env[1557]: time="2025-05-17T00:50:22.411434698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-ce3994935d,Uid:585eba96ef8976ec6422800bdb906114,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd7715e7d3eb883611689e96abe6e0e991b48da2f461a16a17e2be89aa34ae60\"" May 17 00:50:22.414399 env[1557]: time="2025-05-17T00:50:22.414369120Z" level=info msg="CreateContainer within sandbox \"bd7715e7d3eb883611689e96abe6e0e991b48da2f461a16a17e2be89aa34ae60\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:50:22.419239 kubelet[2235]: E0517 00:50:22.419133 2235 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-ce3994935d.18402a2b43c73796 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-ce3994935d,UID:ci-3510.3.7-n-ce3994935d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-ce3994935d,},FirstTimestamp:2025-05-17 00:50:20.728842134 +0000 UTC m=+1.224087957,LastTimestamp:2025-05-17 00:50:20.728842134 +0000 UTC m=+1.224087957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-ce3994935d,}" May 17 00:50:22.448192 env[1557]: time="2025-05-17T00:50:22.448135861Z" level=info msg="StartContainer for \"23e3f6997b43942ee5b58dfb65a7820cb9f8a3e58844e6612c64a413f9aaae5e\" returns successfully" May 17 00:50:22.450987 env[1557]: time="2025-05-17T00:50:22.450944522Z" level=info msg="CreateContainer within sandbox \"2470a01744a17e5cd7237634d8bdaed19dbfddee9b548c575a83f87328f6c4ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"84dcde8f3fa7088f848d8e40abd122dfc07962158d7c1442c3c6e154221aa591\"" May 17 00:50:22.451478 env[1557]: time="2025-05-17T00:50:22.451455286Z" level=info msg="StartContainer for \"84dcde8f3fa7088f848d8e40abd122dfc07962158d7c1442c3c6e154221aa591\"" May 17 00:50:22.459184 kubelet[2235]: I0517 00:50:22.458899 2235 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:22.459287 kubelet[2235]: E0517 00:50:22.459236 2235 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:22.483029 env[1557]: time="2025-05-17T00:50:22.482976650Z" level=info msg="CreateContainer within sandbox \"bd7715e7d3eb883611689e96abe6e0e991b48da2f461a16a17e2be89aa34ae60\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ca372fbdfda4454f65e6fa05fabbf42efd0c8c5ee4068b108d75b20c0d6feaff\"" May 17 00:50:22.483459 env[1557]: time="2025-05-17T00:50:22.483427533Z" level=info msg="StartContainer for \"ca372fbdfda4454f65e6fa05fabbf42efd0c8c5ee4068b108d75b20c0d6feaff\"" May 17 00:50:22.524560 env[1557]: time="2025-05-17T00:50:22.524487650Z" level=info msg="StartContainer for \"84dcde8f3fa7088f848d8e40abd122dfc07962158d7c1442c3c6e154221aa591\" returns successfully" May 17 00:50:22.562825 env[1557]: time="2025-05-17T00:50:22.562776785Z" level=info msg="StartContainer for \"ca372fbdfda4454f65e6fa05fabbf42efd0c8c5ee4068b108d75b20c0d6feaff\" returns successfully" May 17 00:50:24.061392 kubelet[2235]: I0517 00:50:24.061363 2235 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:25.449588 kubelet[2235]: E0517 00:50:25.449547 2235 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-ce3994935d\" not found" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:25.544345 kubelet[2235]: I0517 00:50:25.544306 2235 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:25.729104 kubelet[2235]: I0517 00:50:25.729009 2235 apiserver.go:52] "Watching apiserver" May 17 00:50:25.746150 kubelet[2235]: I0517 00:50:25.746083 2235 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:50:27.564060 systemd[1]: Reloading. 
May 17 00:50:27.635708 /usr/lib/systemd/system-generators/torcx-generator[2528]: time="2025-05-17T00:50:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:50:27.637354 /usr/lib/systemd/system-generators/torcx-generator[2528]: time="2025-05-17T00:50:27Z" level=info msg="torcx already run" May 17 00:50:27.732992 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:50:27.733148 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:50:27.751208 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:50:27.854235 systemd[1]: Stopping kubelet.service... May 17 00:50:27.875076 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:50:27.875513 systemd[1]: Stopped kubelet.service. May 17 00:50:27.877831 systemd[1]: Starting kubelet.service... May 17 00:50:27.967457 systemd[1]: Started kubelet.service. May 17 00:50:28.008198 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:50:28.008198 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 17 00:50:28.008198 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:50:28.008563 kubelet[2603]: I0517 00:50:28.008267 2603 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:50:28.013823 kubelet[2603]: I0517 00:50:28.013796 2603 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:50:28.014017 kubelet[2603]: I0517 00:50:28.014006 2603 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:50:28.014296 kubelet[2603]: I0517 00:50:28.014280 2603 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:50:28.015696 kubelet[2603]: I0517 00:50:28.015678 2603 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:50:28.018696 kubelet[2603]: I0517 00:50:28.018669 2603 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:50:28.022531 kubelet[2603]: E0517 00:50:28.022494 2603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:50:28.022531 kubelet[2603]: I0517 00:50:28.022530 2603 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:50:28.025224 kubelet[2603]: I0517 00:50:28.025198 2603 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:50:28.025594 kubelet[2603]: I0517 00:50:28.025573 2603 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:50:28.025745 kubelet[2603]: I0517 00:50:28.025716 2603 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:50:28.025904 kubelet[2603]: I0517 00:50:28.025743 2603 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-ce3994935d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} May 17 00:50:28.025986 kubelet[2603]: I0517 00:50:28.025910 2603 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:50:28.025986 kubelet[2603]: I0517 00:50:28.025918 2603 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:50:28.025986 kubelet[2603]: I0517 00:50:28.025954 2603 state_mem.go:36] "Initialized new in-memory state store" May 17 00:50:28.026071 kubelet[2603]: I0517 00:50:28.026045 2603 kubelet.go:408] "Attempting to sync node with API server" May 17 00:50:28.026071 kubelet[2603]: I0517 00:50:28.026057 2603 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:50:28.026113 kubelet[2603]: I0517 00:50:28.026075 2603 kubelet.go:314] "Adding apiserver pod source" May 17 00:50:28.026113 kubelet[2603]: I0517 00:50:28.026088 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:50:28.036847 kubelet[2603]: I0517 00:50:28.031028 2603 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:50:28.036847 kubelet[2603]: I0517 00:50:28.031489 2603 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:50:28.036847 kubelet[2603]: I0517 00:50:28.031861 2603 server.go:1274] "Started kubelet" May 17 00:50:28.036847 kubelet[2603]: I0517 00:50:28.033478 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:50:28.056230 kubelet[2603]: I0517 00:50:28.046225 2603 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:50:28.056230 kubelet[2603]: I0517 00:50:28.046970 2603 server.go:449] "Adding debug handlers to kubelet server" May 17 00:50:28.056230 kubelet[2603]: I0517 00:50:28.047811 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:50:28.056230 kubelet[2603]: I0517 00:50:28.047987 2603 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:50:28.056230 kubelet[2603]: I0517 00:50:28.048145 2603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:50:28.056230 kubelet[2603]: I0517 00:50:28.053264 2603 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:50:28.056230 kubelet[2603]: I0517 00:50:28.053481 2603 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:50:28.056230 kubelet[2603]: I0517 00:50:28.053595 2603 reconciler.go:26] "Reconciler: start to sync state" May 17 00:50:28.058496 kubelet[2603]: E0517 00:50:28.058325 2603 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:50:28.061771 kubelet[2603]: I0517 00:50:28.061741 2603 factory.go:221] Registration of the systemd container factory successfully May 17 00:50:28.061870 kubelet[2603]: I0517 00:50:28.061845 2603 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:50:28.066698 kubelet[2603]: I0517 00:50:28.064909 2603 factory.go:221] Registration of the containerd container factory successfully May 17 00:50:28.081801 kubelet[2603]: I0517 00:50:28.081761 2603 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:50:28.082837 kubelet[2603]: I0517 00:50:28.082813 2603 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:50:28.082887 kubelet[2603]: I0517 00:50:28.082840 2603 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:50:28.082887 kubelet[2603]: I0517 00:50:28.082860 2603 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:50:28.082934 kubelet[2603]: E0517 00:50:28.082901 2603 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.126804 2603 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.126827 2603 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.126851 2603 state_mem.go:36] "Initialized new in-memory state store" May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.126992 2603 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.127002 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.127020 2603 policy_none.go:49] "None policy: Start" May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.127733 2603 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.127755 2603 state_mem.go:35] "Initializing new in-memory state store" May 17 00:50:28.129107 kubelet[2603]: I0517 00:50:28.127889 2603 state_mem.go:75] "Updated machine memory state" May 17 00:50:28.131826 kubelet[2603]: I0517 00:50:28.131023 2603 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:50:28.133216 kubelet[2603]: I0517 00:50:28.132491 2603 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:50:28.133216 kubelet[2603]: I0517 00:50:28.132507 2603 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:50:28.135353 kubelet[2603]: I0517 00:50:28.135336 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:50:28.196615 kubelet[2603]: W0517 00:50:28.196564 2603 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:50:28.202295 kubelet[2603]: W0517 00:50:28.202264 2603 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:50:28.202488 kubelet[2603]: W0517 00:50:28.202466 2603 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:50:28.245979 kubelet[2603]: I0517 00:50:28.245939 2603 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256146 kubelet[2603]: I0517 00:50:28.256109 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/31d358a36d2d53cc881b882b25bff001-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-ce3994935d\" (UID: \"31d358a36d2d53cc881b882b25bff001\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256146 kubelet[2603]: I0517 00:50:28.256146 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/31d358a36d2d53cc881b882b25bff001-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-ce3994935d\" (UID: \"31d358a36d2d53cc881b882b25bff001\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256299 kubelet[2603]: I0517 00:50:28.256204 2603 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256299 kubelet[2603]: I0517 00:50:28.256225 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256299 kubelet[2603]: I0517 00:50:28.256270 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6847fbb63a9714fdb0d72514d6de7d90-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-ce3994935d\" (UID: \"6847fbb63a9714fdb0d72514d6de7d90\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256299 kubelet[2603]: I0517 00:50:28.256288 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/31d358a36d2d53cc881b882b25bff001-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-ce3994935d\" (UID: \"31d358a36d2d53cc881b882b25bff001\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256400 kubelet[2603]: I0517 00:50:28.256303 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256400 kubelet[2603]: I0517 00:50:28.256348 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.256400 kubelet[2603]: I0517 00:50:28.256364 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/585eba96ef8976ec6422800bdb906114-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-ce3994935d\" (UID: \"585eba96ef8976ec6422800bdb906114\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" May 17 00:50:28.260595 kubelet[2603]: I0517 00:50:28.260565 2603 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:28.260728 kubelet[2603]: I0517 00:50:28.260684 2603 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-ce3994935d" May 17 00:50:28.605051 sudo[2635]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:50:28.605395 sudo[2635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:50:29.027392 kubelet[2603]: I0517 00:50:29.027350 2603 apiserver.go:52] "Watching apiserver" May 17 00:50:29.054537 kubelet[2603]: I0517 00:50:29.054503 2603 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:50:29.076176 sudo[2635]: pam_unix(sudo:session): session closed for user root May 17 00:50:29.119854 kubelet[2603]: W0517 00:50:29.119825 2603 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] May 17 00:50:29.120065 kubelet[2603]: E0517 00:50:29.120046 2603 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-ce3994935d\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-ce3994935d" May 17 00:50:29.142666 kubelet[2603]: I0517 00:50:29.142598 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-ce3994935d" podStartSLOduration=1.142580382 podStartE2EDuration="1.142580382s" podCreationTimestamp="2025-05-17 00:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:29.129657678 +0000 UTC m=+1.156058265" watchObservedRunningTime="2025-05-17 00:50:29.142580382 +0000 UTC m=+1.168980969" May 17 00:50:29.154904 kubelet[2603]: I0517 00:50:29.154851 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-ce3994935d" podStartSLOduration=1.154836322 podStartE2EDuration="1.154836322s" podCreationTimestamp="2025-05-17 00:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:29.15441604 +0000 UTC m=+1.180816627" watchObservedRunningTime="2025-05-17 00:50:29.154836322 +0000 UTC m=+1.181236869" May 17 00:50:29.155193 kubelet[2603]: I0517 00:50:29.155162 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-ce3994935d" podStartSLOduration=1.155154324 podStartE2EDuration="1.155154324s" podCreationTimestamp="2025-05-17 00:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:29.142992584 +0000 UTC m=+1.169393171" watchObservedRunningTime="2025-05-17 
00:50:29.155154324 +0000 UTC m=+1.181554911" May 17 00:50:31.010556 sudo[1903]: pam_unix(sudo:session): session closed for user root May 17 00:50:31.103277 sshd[1899]: pam_unix(sshd:session): session closed for user core May 17 00:50:31.106066 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. May 17 00:50:31.106351 systemd[1]: sshd@4-10.200.20.21:22-10.200.16.10:33736.service: Deactivated successfully. May 17 00:50:31.107159 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:50:31.108275 systemd-logind[1545]: Removed session 7. May 17 00:50:32.796835 kubelet[2603]: I0517 00:50:32.796811 2603 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:50:32.797553 env[1557]: time="2025-05-17T00:50:32.797504222Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:50:32.797971 kubelet[2603]: I0517 00:50:32.797953 2603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:50:33.585034 kubelet[2603]: I0517 00:50:33.584999 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-cgroup\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.585246 kubelet[2603]: I0517 00:50:33.585230 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-lib-modules\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.585321 kubelet[2603]: I0517 00:50:33.585309 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-xtables-lock\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.585392 kubelet[2603]: I0517 00:50:33.585377 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hn2n\" (UniqueName: \"kubernetes.io/projected/9be6a26b-108a-4f42-a9e7-dea1f7181291-kube-api-access-2hn2n\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.585459 kubelet[2603]: I0517 00:50:33.585448 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-host-proc-sys-kernel\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.585540 kubelet[2603]: I0517 00:50:33.585517 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a1ecfa6-b882-4cbc-a5d0-1bab6536d414-xtables-lock\") pod \"kube-proxy-gbrjd\" (UID: \"8a1ecfa6-b882-4cbc-a5d0-1bab6536d414\") " pod="kube-system/kube-proxy-gbrjd" May 17 00:50:33.585616 kubelet[2603]: I0517 00:50:33.585601 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-hostproc\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.585718 kubelet[2603]: I0517 00:50:33.585704 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-etc-cni-netd\") pod 
\"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.585786 kubelet[2603]: I0517 00:50:33.585774 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-host-proc-sys-net\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.585865 kubelet[2603]: I0517 00:50:33.585852 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a1ecfa6-b882-4cbc-a5d0-1bab6536d414-lib-modules\") pod \"kube-proxy-gbrjd\" (UID: \"8a1ecfa6-b882-4cbc-a5d0-1bab6536d414\") " pod="kube-system/kube-proxy-gbrjd" May 17 00:50:33.585944 kubelet[2603]: I0517 00:50:33.585926 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdz9\" (UniqueName: \"kubernetes.io/projected/8a1ecfa6-b882-4cbc-a5d0-1bab6536d414-kube-api-access-wcdz9\") pod \"kube-proxy-gbrjd\" (UID: \"8a1ecfa6-b882-4cbc-a5d0-1bab6536d414\") " pod="kube-system/kube-proxy-gbrjd" May 17 00:50:33.586018 kubelet[2603]: I0517 00:50:33.586006 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-run\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.586091 kubelet[2603]: I0517 00:50:33.586078 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-config-path\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " 
pod="kube-system/cilium-vxkzb" May 17 00:50:33.586162 kubelet[2603]: I0517 00:50:33.586150 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9be6a26b-108a-4f42-a9e7-dea1f7181291-clustermesh-secrets\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.586233 kubelet[2603]: I0517 00:50:33.586221 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a1ecfa6-b882-4cbc-a5d0-1bab6536d414-kube-proxy\") pod \"kube-proxy-gbrjd\" (UID: \"8a1ecfa6-b882-4cbc-a5d0-1bab6536d414\") " pod="kube-system/kube-proxy-gbrjd" May 17 00:50:33.586312 kubelet[2603]: I0517 00:50:33.586298 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-bpf-maps\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.586387 kubelet[2603]: I0517 00:50:33.586375 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9be6a26b-108a-4f42-a9e7-dea1f7181291-hubble-tls\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.586469 kubelet[2603]: I0517 00:50:33.586456 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cni-path\") pod \"cilium-vxkzb\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") " pod="kube-system/cilium-vxkzb" May 17 00:50:33.691582 kubelet[2603]: I0517 00:50:33.691548 2603 swap_util.go:74] "error creating dir to test if tmpfs 
noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:50:33.849872 env[1557]: time="2025-05-17T00:50:33.849328984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gbrjd,Uid:8a1ecfa6-b882-4cbc-a5d0-1bab6536d414,Namespace:kube-system,Attempt:0,}"
May 17 00:50:33.858440 env[1557]: time="2025-05-17T00:50:33.857924897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vxkzb,Uid:9be6a26b-108a-4f42-a9e7-dea1f7181291,Namespace:kube-system,Attempt:0,}"
May 17 00:50:33.922355 env[1557]: time="2025-05-17T00:50:33.921072696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:50:33.922355 env[1557]: time="2025-05-17T00:50:33.921107456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:50:33.922355 env[1557]: time="2025-05-17T00:50:33.921122016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:50:33.922355 env[1557]: time="2025-05-17T00:50:33.921233697Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c974085097815ed7919e65cf08ce3423a5eedcce94b09638a72a1e06fcc33e3 pid=2691 runtime=io.containerd.runc.v2
May 17 00:50:33.923706 env[1557]: time="2025-05-17T00:50:33.922594062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:50:33.923706 env[1557]: time="2025-05-17T00:50:33.922668822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:50:33.923706 env[1557]: time="2025-05-17T00:50:33.922680742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:50:33.923706 env[1557]: time="2025-05-17T00:50:33.922848143Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea pid=2690 runtime=io.containerd.runc.v2
May 17 00:50:33.984124 env[1557]: time="2025-05-17T00:50:33.983940495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gbrjd,Uid:8a1ecfa6-b882-4cbc-a5d0-1bab6536d414,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c974085097815ed7919e65cf08ce3423a5eedcce94b09638a72a1e06fcc33e3\""
May 17 00:50:33.985296 env[1557]: time="2025-05-17T00:50:33.985266740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vxkzb,Uid:9be6a26b-108a-4f42-a9e7-dea1f7181291,Namespace:kube-system,Attempt:0,} returns sandbox id \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\""
May 17 00:50:33.988400 env[1557]: time="2025-05-17T00:50:33.988328791Z" level=info msg="CreateContainer within sandbox \"8c974085097815ed7919e65cf08ce3423a5eedcce94b09638a72a1e06fcc33e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:50:33.988881 env[1557]: time="2025-05-17T00:50:33.988857633Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:50:33.993191 kubelet[2603]: I0517 00:50:33.993146 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb2b49a7-c400-4835-861f-4b4a7927dd12-cilium-config-path\") pod \"cilium-operator-5d85765b45-vm7h6\" (UID: \"bb2b49a7-c400-4835-861f-4b4a7927dd12\") " pod="kube-system/cilium-operator-5d85765b45-vm7h6"
May 17 00:50:33.993459 kubelet[2603]: I0517 00:50:33.993200 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssz9n\" (UniqueName: \"kubernetes.io/projected/bb2b49a7-c400-4835-861f-4b4a7927dd12-kube-api-access-ssz9n\") pod \"cilium-operator-5d85765b45-vm7h6\" (UID: \"bb2b49a7-c400-4835-861f-4b4a7927dd12\") " pod="kube-system/cilium-operator-5d85765b45-vm7h6"
May 17 00:50:34.029984 env[1557]: time="2025-05-17T00:50:34.029942383Z" level=info msg="CreateContainer within sandbox \"8c974085097815ed7919e65cf08ce3423a5eedcce94b09638a72a1e06fcc33e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2202f89b27e0440744a7f66792ef1440e7b493984cccb484ab958c2f98f5a8f2\""
May 17 00:50:34.032167 env[1557]: time="2025-05-17T00:50:34.032134150Z" level=info msg="StartContainer for \"2202f89b27e0440744a7f66792ef1440e7b493984cccb484ab958c2f98f5a8f2\""
May 17 00:50:34.090674 env[1557]: time="2025-05-17T00:50:34.089704995Z" level=info msg="StartContainer for \"2202f89b27e0440744a7f66792ef1440e7b493984cccb484ab958c2f98f5a8f2\" returns successfully"
May 17 00:50:34.208405 env[1557]: time="2025-05-17T00:50:34.208367257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vm7h6,Uid:bb2b49a7-c400-4835-861f-4b4a7927dd12,Namespace:kube-system,Attempt:0,}"
May 17 00:50:34.246743 env[1557]: time="2025-05-17T00:50:34.246523113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:50:34.246743 env[1557]: time="2025-05-17T00:50:34.246559113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:50:34.246743 env[1557]: time="2025-05-17T00:50:34.246568713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:50:34.246926 env[1557]: time="2025-05-17T00:50:34.246774434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37 pid=2839 runtime=io.containerd.runc.v2
May 17 00:50:34.286417 env[1557]: time="2025-05-17T00:50:34.286366095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vm7h6,Uid:bb2b49a7-c400-4835-861f-4b4a7927dd12,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37\""
May 17 00:50:35.519394 kubelet[2603]: I0517 00:50:35.519339 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gbrjd" podStartSLOduration=2.519319245 podStartE2EDuration="2.519319245s" podCreationTimestamp="2025-05-17 00:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:34.128412493 +0000 UTC m=+6.154813080" watchObservedRunningTime="2025-05-17 00:50:35.519319245 +0000 UTC m=+7.545719832"
May 17 00:50:39.637249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860535220.mount: Deactivated successfully.
May 17 00:50:42.475991 env[1557]: time="2025-05-17T00:50:42.475945958Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:42.488716 env[1557]: time="2025-05-17T00:50:42.488671505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:42.495720 env[1557]: time="2025-05-17T00:50:42.495683920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:42.496240 env[1557]: time="2025-05-17T00:50:42.496211481Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 17 00:50:42.498301 env[1557]: time="2025-05-17T00:50:42.498253885Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:50:42.499289 env[1557]: time="2025-05-17T00:50:42.499261807Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:50:42.530840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196348517.mount: Deactivated successfully.
May 17 00:50:42.537353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268385741.mount: Deactivated successfully.
May 17 00:50:42.551324 env[1557]: time="2025-05-17T00:50:42.551282678Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\""
May 17 00:50:42.552100 env[1557]: time="2025-05-17T00:50:42.552062399Z" level=info msg="StartContainer for \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\""
May 17 00:50:42.603836 env[1557]: time="2025-05-17T00:50:42.603788429Z" level=info msg="StartContainer for \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\" returns successfully"
May 17 00:50:43.528266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8-rootfs.mount: Deactivated successfully.
May 17 00:50:44.092362 env[1557]: time="2025-05-17T00:50:44.092316192Z" level=info msg="shim disconnected" id=19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8
May 17 00:50:44.092362 env[1557]: time="2025-05-17T00:50:44.092361232Z" level=warning msg="cleaning up after shim disconnected" id=19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8 namespace=k8s.io
May 17 00:50:44.092362 env[1557]: time="2025-05-17T00:50:44.092369872Z" level=info msg="cleaning up dead shim"
May 17 00:50:44.099449 env[1557]: time="2025-05-17T00:50:44.099407165Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:50:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3013 runtime=io.containerd.runc.v2\n"
May 17 00:50:44.136998 env[1557]: time="2025-05-17T00:50:44.136956395Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:50:44.166267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746829814.mount: Deactivated successfully.
May 17 00:50:44.173319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581402720.mount: Deactivated successfully.
May 17 00:50:44.182499 env[1557]: time="2025-05-17T00:50:44.182455680Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\""
May 17 00:50:44.183154 env[1557]: time="2025-05-17T00:50:44.183125841Z" level=info msg="StartContainer for \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\""
May 17 00:50:44.231944 env[1557]: time="2025-05-17T00:50:44.229045847Z" level=info msg="StartContainer for \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\" returns successfully"
May 17 00:50:44.234549 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:50:44.234812 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:50:44.234982 systemd[1]: Stopping systemd-sysctl.service...
May 17 00:50:44.237315 systemd[1]: Starting systemd-sysctl.service...
May 17 00:50:44.247958 systemd[1]: Finished systemd-sysctl.service.
May 17 00:50:44.282041 env[1557]: time="2025-05-17T00:50:44.282003466Z" level=info msg="shim disconnected" id=e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a
May 17 00:50:44.282233 env[1557]: time="2025-05-17T00:50:44.282215906Z" level=warning msg="cleaning up after shim disconnected" id=e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a namespace=k8s.io
May 17 00:50:44.282310 env[1557]: time="2025-05-17T00:50:44.282297266Z" level=info msg="cleaning up dead shim"
May 17 00:50:44.289114 env[1557]: time="2025-05-17T00:50:44.289081359Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:50:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3076 runtime=io.containerd.runc.v2\n"
May 17 00:50:45.124987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004835198.mount: Deactivated successfully.
May 17 00:50:45.155758 env[1557]: time="2025-05-17T00:50:45.155689278Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:50:45.240988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585404108.mount: Deactivated successfully.
May 17 00:50:45.272125 env[1557]: time="2025-05-17T00:50:45.272082441Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\""
May 17 00:50:45.273918 env[1557]: time="2025-05-17T00:50:45.273696764Z" level=info msg="StartContainer for \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\""
May 17 00:50:45.331734 env[1557]: time="2025-05-17T00:50:45.331681505Z" level=info msg="StartContainer for \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\" returns successfully"
May 17 00:50:45.379847 env[1557]: time="2025-05-17T00:50:45.379453909Z" level=info msg="shim disconnected" id=2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f
May 17 00:50:45.379847 env[1557]: time="2025-05-17T00:50:45.379502909Z" level=warning msg="cleaning up after shim disconnected" id=2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f namespace=k8s.io
May 17 00:50:45.379847 env[1557]: time="2025-05-17T00:50:45.379511069Z" level=info msg="cleaning up dead shim"
May 17 00:50:45.387422 env[1557]: time="2025-05-17T00:50:45.387375043Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:50:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3132 runtime=io.containerd.runc.v2\n"
May 17 00:50:45.866710 env[1557]: time="2025-05-17T00:50:45.866669841Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:45.877700 env[1557]: time="2025-05-17T00:50:45.877663660Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:45.883304 env[1557]: time="2025-05-17T00:50:45.883264270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:45.883864 env[1557]: time="2025-05-17T00:50:45.883832391Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 17 00:50:45.886468 env[1557]: time="2025-05-17T00:50:45.886434275Z" level=info msg="CreateContainer within sandbox \"a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:50:45.918115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount566192828.mount: Deactivated successfully.
May 17 00:50:45.925230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213901293.mount: Deactivated successfully.
May 17 00:50:45.955718 env[1557]: time="2025-05-17T00:50:45.955659076Z" level=info msg="CreateContainer within sandbox \"a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\""
May 17 00:50:45.957590 env[1557]: time="2025-05-17T00:50:45.956421598Z" level=info msg="StartContainer for \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\""
May 17 00:50:46.009012 env[1557]: time="2025-05-17T00:50:46.008965689Z" level=info msg="StartContainer for \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\" returns successfully"
May 17 00:50:46.160363 env[1557]: time="2025-05-17T00:50:46.159940016Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:50:46.197160 env[1557]: time="2025-05-17T00:50:46.197108397Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\""
May 17 00:50:46.199068 env[1557]: time="2025-05-17T00:50:46.199042721Z" level=info msg="StartContainer for \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\""
May 17 00:50:46.224078 kubelet[2603]: I0517 00:50:46.223943 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vm7h6" podStartSLOduration=1.6267915880000001 podStartE2EDuration="13.223916841s" podCreationTimestamp="2025-05-17 00:50:33 +0000 UTC" firstStartedPulling="2025-05-17 00:50:34.287541579 +0000 UTC m=+6.313942166" lastFinishedPulling="2025-05-17 00:50:45.884666832 +0000 UTC m=+17.911067419" observedRunningTime="2025-05-17 00:50:46.223690161 +0000 UTC m=+18.250090748" watchObservedRunningTime="2025-05-17 00:50:46.223916841 +0000 UTC m=+18.250317428"
May 17 00:50:46.267381 env[1557]: time="2025-05-17T00:50:46.267335232Z" level=info msg="StartContainer for \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\" returns successfully"
May 17 00:50:46.602794 env[1557]: time="2025-05-17T00:50:46.602751702Z" level=info msg="shim disconnected" id=15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a
May 17 00:50:46.603034 env[1557]: time="2025-05-17T00:50:46.603016783Z" level=warning msg="cleaning up after shim disconnected" id=15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a namespace=k8s.io
May 17 00:50:46.603118 env[1557]: time="2025-05-17T00:50:46.603104423Z" level=info msg="cleaning up dead shim"
May 17 00:50:46.628515 env[1557]: time="2025-05-17T00:50:46.628465425Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:50:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3223 runtime=io.containerd.runc.v2\n"
May 17 00:50:47.172303 env[1557]: time="2025-05-17T00:50:47.172240898Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:50:47.223222 env[1557]: time="2025-05-17T00:50:47.223156937Z" level=info msg="CreateContainer within sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\""
May 17 00:50:47.224096 env[1557]: time="2025-05-17T00:50:47.224068098Z" level=info msg="StartContainer for \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\""
May 17 00:50:47.311206 env[1557]: time="2025-05-17T00:50:47.311148112Z" level=info msg="StartContainer for \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\" returns successfully"
May 17 00:50:47.395706 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 17 00:50:47.475480 kubelet[2603]: I0517 00:50:47.474475 2603 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 17 00:50:47.528445 systemd[1]: run-containerd-runc-k8s.io-cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396-runc.03fSyo.mount: Deactivated successfully.
May 17 00:50:47.579107 kubelet[2603]: I0517 00:50:47.579071 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxbs\" (UniqueName: \"kubernetes.io/projected/efd1c8dd-1a3d-482a-abca-40f409dd62dc-kube-api-access-nfxbs\") pod \"coredns-7c65d6cfc9-5lgq4\" (UID: \"efd1c8dd-1a3d-482a-abca-40f409dd62dc\") " pod="kube-system/coredns-7c65d6cfc9-5lgq4"
May 17 00:50:47.579326 kubelet[2603]: I0517 00:50:47.579308 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc93f92d-7c1f-44d1-bccf-986de9db9264-config-volume\") pod \"coredns-7c65d6cfc9-w2h4c\" (UID: \"cc93f92d-7c1f-44d1-bccf-986de9db9264\") " pod="kube-system/coredns-7c65d6cfc9-w2h4c"
May 17 00:50:47.579448 kubelet[2603]: I0517 00:50:47.579434 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbvpn\" (UniqueName: \"kubernetes.io/projected/cc93f92d-7c1f-44d1-bccf-986de9db9264-kube-api-access-cbvpn\") pod \"coredns-7c65d6cfc9-w2h4c\" (UID: \"cc93f92d-7c1f-44d1-bccf-986de9db9264\") " pod="kube-system/coredns-7c65d6cfc9-w2h4c"
May 17 00:50:47.579561 kubelet[2603]: I0517 00:50:47.579548 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efd1c8dd-1a3d-482a-abca-40f409dd62dc-config-volume\") pod \"coredns-7c65d6cfc9-5lgq4\" (UID: \"efd1c8dd-1a3d-482a-abca-40f409dd62dc\") " pod="kube-system/coredns-7c65d6cfc9-5lgq4"
May 17 00:50:47.835716 env[1557]: time="2025-05-17T00:50:47.835555838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w2h4c,Uid:cc93f92d-7c1f-44d1-bccf-986de9db9264,Namespace:kube-system,Attempt:0,}"
May 17 00:50:47.837558 env[1557]: time="2025-05-17T00:50:47.837522961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5lgq4,Uid:efd1c8dd-1a3d-482a-abca-40f409dd62dc,Namespace:kube-system,Attempt:0,}"
May 17 00:50:47.884648 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 17 00:50:48.199614 kubelet[2603]: I0517 00:50:48.198825 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vxkzb" podStartSLOduration=6.689706406 podStartE2EDuration="15.198807137s" podCreationTimestamp="2025-05-17 00:50:33 +0000 UTC" firstStartedPulling="2025-05-17 00:50:33.988377432 +0000 UTC m=+6.014778019" lastFinishedPulling="2025-05-17 00:50:42.497478163 +0000 UTC m=+14.523878750" observedRunningTime="2025-05-17 00:50:48.197801216 +0000 UTC m=+20.224201843" watchObservedRunningTime="2025-05-17 00:50:48.198807137 +0000 UTC m=+20.225207724"
May 17 00:50:49.532972 systemd-networkd[1758]: cilium_host: Link UP
May 17 00:50:49.533168 systemd-networkd[1758]: cilium_net: Link UP
May 17 00:50:49.544237 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 17 00:50:49.544334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 17 00:50:49.547359 systemd-networkd[1758]: cilium_net: Gained carrier
May 17 00:50:49.547563 systemd-networkd[1758]: cilium_host: Gained carrier
May 17 00:50:49.749463 systemd-networkd[1758]: cilium_vxlan: Link UP
May 17 00:50:49.749470 systemd-networkd[1758]: cilium_vxlan: Gained carrier
May 17 00:50:49.941727 systemd-networkd[1758]: cilium_host: Gained IPv6LL
May 17 00:50:50.002653 kernel: NET: Registered PF_ALG protocol family
May 17 00:50:50.198720 systemd-networkd[1758]: cilium_net: Gained IPv6LL
May 17 00:50:50.626159 systemd-networkd[1758]: lxc_health: Link UP
May 17 00:50:50.649062 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:50:50.644800 systemd-networkd[1758]: lxc_health: Gained carrier
May 17 00:50:50.921526 systemd-networkd[1758]: lxc7ab80766ecb2: Link UP
May 17 00:50:50.928687 kernel: eth0: renamed from tmp43f6d
May 17 00:50:50.942703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7ab80766ecb2: link becomes ready
May 17 00:50:50.942185 systemd-networkd[1758]: lxc7ab80766ecb2: Gained carrier
May 17 00:50:50.953392 systemd-networkd[1758]: lxc44fffdd07ea0: Link UP
May 17 00:50:50.972850 kernel: eth0: renamed from tmp0c953
May 17 00:50:50.980108 systemd-networkd[1758]: lxc44fffdd07ea0: Gained carrier
May 17 00:50:50.980793 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc44fffdd07ea0: link becomes ready
May 17 00:50:51.093819 systemd-networkd[1758]: cilium_vxlan: Gained IPv6LL
May 17 00:50:51.733793 systemd-networkd[1758]: lxc_health: Gained IPv6LL
May 17 00:50:52.693781 systemd-networkd[1758]: lxc7ab80766ecb2: Gained IPv6LL
May 17 00:50:52.694028 systemd-networkd[1758]: lxc44fffdd07ea0: Gained IPv6LL
May 17 00:50:54.403158 env[1557]: time="2025-05-17T00:50:54.403086568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:50:54.404955 env[1557]: time="2025-05-17T00:50:54.404923970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:50:54.405098 env[1557]: time="2025-05-17T00:50:54.405076050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:50:54.405357 env[1557]: time="2025-05-17T00:50:54.405320570Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c953da2d516f40709ba421bd2b6fca46bd1fc7e7270fd632670df549a1d5d6f pid=3767 runtime=io.containerd.runc.v2
May 17 00:50:54.422375 env[1557]: time="2025-05-17T00:50:54.422318507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:50:54.422551 env[1557]: time="2025-05-17T00:50:54.422528427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:50:54.422676 env[1557]: time="2025-05-17T00:50:54.422654027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:50:54.422985 env[1557]: time="2025-05-17T00:50:54.422942747Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43f6dd5b796d4b8b1d05339f7701fe8e2c2be880e606ec590efa9e8a9239d94e pid=3788 runtime=io.containerd.runc.v2
May 17 00:50:54.456154 systemd[1]: run-containerd-runc-k8s.io-43f6dd5b796d4b8b1d05339f7701fe8e2c2be880e606ec590efa9e8a9239d94e-runc.bchZuK.mount: Deactivated successfully.
May 17 00:50:54.506419 env[1557]: time="2025-05-17T00:50:54.506290749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5lgq4,Uid:efd1c8dd-1a3d-482a-abca-40f409dd62dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c953da2d516f40709ba421bd2b6fca46bd1fc7e7270fd632670df549a1d5d6f\""
May 17 00:50:54.511239 env[1557]: time="2025-05-17T00:50:54.511205274Z" level=info msg="CreateContainer within sandbox \"0c953da2d516f40709ba421bd2b6fca46bd1fc7e7270fd632670df549a1d5d6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:50:54.520084 env[1557]: time="2025-05-17T00:50:54.520029642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w2h4c,Uid:cc93f92d-7c1f-44d1-bccf-986de9db9264,Namespace:kube-system,Attempt:0,} returns sandbox id \"43f6dd5b796d4b8b1d05339f7701fe8e2c2be880e606ec590efa9e8a9239d94e\""
May 17 00:50:54.525341 env[1557]: time="2025-05-17T00:50:54.525283807Z" level=info msg="CreateContainer within sandbox \"43f6dd5b796d4b8b1d05339f7701fe8e2c2be880e606ec590efa9e8a9239d94e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:50:54.570949 env[1557]: time="2025-05-17T00:50:54.570870532Z" level=info msg="CreateContainer within sandbox \"0c953da2d516f40709ba421bd2b6fca46bd1fc7e7270fd632670df549a1d5d6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f28a6fee8a9810f0013ba3d20cf9164d80b8dc659c7d24ebc95bd1612fae8a6\""
May 17 00:50:54.571873 env[1557]: time="2025-05-17T00:50:54.571841773Z" level=info msg="StartContainer for \"2f28a6fee8a9810f0013ba3d20cf9164d80b8dc659c7d24ebc95bd1612fae8a6\""
May 17 00:50:54.575964 env[1557]: time="2025-05-17T00:50:54.575873457Z" level=info msg="CreateContainer within sandbox \"43f6dd5b796d4b8b1d05339f7701fe8e2c2be880e606ec590efa9e8a9239d94e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f6c25053264c6f2cccb1608801d6bcce7c3a69044641eef9b80fab0db8fcbde\""
May 17 00:50:54.577841 env[1557]: time="2025-05-17T00:50:54.577801579Z" level=info msg="StartContainer for \"3f6c25053264c6f2cccb1608801d6bcce7c3a69044641eef9b80fab0db8fcbde\""
May 17 00:50:54.648651 env[1557]: time="2025-05-17T00:50:54.645771405Z" level=info msg="StartContainer for \"2f28a6fee8a9810f0013ba3d20cf9164d80b8dc659c7d24ebc95bd1612fae8a6\" returns successfully"
May 17 00:50:54.648651 env[1557]: time="2025-05-17T00:50:54.646392046Z" level=info msg="StartContainer for \"3f6c25053264c6f2cccb1608801d6bcce7c3a69044641eef9b80fab0db8fcbde\" returns successfully"
May 17 00:50:55.078215 kubelet[2603]: I0517 00:50:55.078172 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:50:55.224543 kubelet[2603]: I0517 00:50:55.224480 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5lgq4" podStartSLOduration=22.224461398 podStartE2EDuration="22.224461398s" podCreationTimestamp="2025-05-17 00:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:55.20474934 +0000 UTC m=+27.231149887" watchObservedRunningTime="2025-05-17 00:50:55.224461398 +0000 UTC m=+27.250861985"
May 17 00:50:55.245162 kubelet[2603]: I0517 00:50:55.245094 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w2h4c" podStartSLOduration=22.245074697 podStartE2EDuration="22.245074697s" podCreationTimestamp="2025-05-17 00:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:55.226021279 +0000 UTC m=+27.252421866" watchObservedRunningTime="2025-05-17 00:50:55.245074697 +0000 UTC m=+27.271475284"
May 17 00:52:32.134647 systemd[1]: Started sshd@5-10.200.20.21:22-10.200.16.10:58776.service.
May 17 00:52:32.610652 sshd[3932]: Accepted publickey for core from 10.200.16.10 port 58776 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:52:32.612320 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:52:32.617048 systemd[1]: Started session-8.scope.
May 17 00:52:32.617710 systemd-logind[1545]: New session 8 of user core.
May 17 00:52:33.099897 sshd[3932]: pam_unix(sshd:session): session closed for user core
May 17 00:52:33.102349 systemd[1]: sshd@5-10.200.20.21:22-10.200.16.10:58776.service: Deactivated successfully.
May 17 00:52:33.103579 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit.
May 17 00:52:33.104194 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:52:33.105018 systemd-logind[1545]: Removed session 8.
May 17 00:52:38.179647 systemd[1]: Started sshd@6-10.200.20.21:22-10.200.16.10:58792.service.
May 17 00:52:38.661311 sshd[3947]: Accepted publickey for core from 10.200.16.10 port 58792 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:52:38.662976 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:52:38.668281 systemd[1]: Started session-9.scope.
May 17 00:52:38.669194 systemd-logind[1545]: New session 9 of user core.
May 17 00:52:39.071469 sshd[3947]: pam_unix(sshd:session): session closed for user core
May 17 00:52:39.073894 systemd[1]: sshd@6-10.200.20.21:22-10.200.16.10:58792.service: Deactivated successfully.
May 17 00:52:39.074693 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:52:39.075777 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit.
May 17 00:52:39.076496 systemd-logind[1545]: Removed session 9.
May 17 00:52:44.145240 systemd[1]: Started sshd@7-10.200.20.21:22-10.200.16.10:48746.service.
May 17 00:52:44.597031 sshd[3961]: Accepted publickey for core from 10.200.16.10 port 48746 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:52:44.598267 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:52:44.602565 systemd[1]: Started session-10.scope.
May 17 00:52:44.602762 systemd-logind[1545]: New session 10 of user core.
May 17 00:52:44.994106 sshd[3961]: pam_unix(sshd:session): session closed for user core
May 17 00:52:44.996785 systemd[1]: sshd@7-10.200.20.21:22-10.200.16.10:48746.service: Deactivated successfully.
May 17 00:52:44.996943 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit.
May 17 00:52:44.997544 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:52:44.998298 systemd-logind[1545]: Removed session 10.
May 17 00:52:50.067994 systemd[1]: Started sshd@8-10.200.20.21:22-10.200.16.10:56124.service.
May 17 00:52:50.518093 sshd[3975]: Accepted publickey for core from 10.200.16.10 port 56124 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:52:50.519422 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:52:50.524021 systemd[1]: Started session-11.scope.
May 17 00:52:50.524502 systemd-logind[1545]: New session 11 of user core.
May 17 00:52:50.922223 sshd[3975]: pam_unix(sshd:session): session closed for user core
May 17 00:52:50.926000 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit.
May 17 00:52:50.926158 systemd[1]: sshd@8-10.200.20.21:22-10.200.16.10:56124.service: Deactivated successfully.
May 17 00:52:50.927028 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:52:50.927464 systemd-logind[1545]: Removed session 11.
May 17 00:52:50.996086 systemd[1]: Started sshd@9-10.200.20.21:22-10.200.16.10:56128.service.
May 17 00:52:51.446715 sshd[3988]: Accepted publickey for core from 10.200.16.10 port 56128 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:52:51.447956 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:52:51.452130 systemd-logind[1545]: New session 12 of user core.
May 17 00:52:51.452365 systemd[1]: Started session-12.scope.
May 17 00:52:51.885332 sshd[3988]: pam_unix(sshd:session): session closed for user core
May 17 00:52:51.888300 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit.
May 17 00:52:51.888429 systemd[1]: sshd@9-10.200.20.21:22-10.200.16.10:56128.service: Deactivated successfully.
May 17 00:52:51.889252 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:52:51.889685 systemd-logind[1545]: Removed session 12.
May 17 00:52:51.963749 systemd[1]: Started sshd@10-10.200.20.21:22-10.200.16.10:56142.service.
May 17 00:52:52.440100 sshd[4000]: Accepted publickey for core from 10.200.16.10 port 56142 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:52:52.441342 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:52:52.445305 systemd-logind[1545]: New session 13 of user core.
May 17 00:52:52.445744 systemd[1]: Started session-13.scope.
May 17 00:52:52.861354 sshd[4000]: pam_unix(sshd:session): session closed for user core
May 17 00:52:52.863933 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit.
May 17 00:52:52.864763 systemd[1]: sshd@10-10.200.20.21:22-10.200.16.10:56142.service: Deactivated successfully.
May 17 00:52:52.865544 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:52:52.866285 systemd-logind[1545]: Removed session 13.
May 17 00:52:57.934461 systemd[1]: Started sshd@11-10.200.20.21:22-10.200.16.10:56156.service.
May 17 00:52:58.380607 sshd[4013]: Accepted publickey for core from 10.200.16.10 port 56156 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:52:58.382239 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:52:58.386103 systemd-logind[1545]: New session 14 of user core.
May 17 00:52:58.386540 systemd[1]: Started session-14.scope.
May 17 00:52:58.786854 sshd[4013]: pam_unix(sshd:session): session closed for user core
May 17 00:52:58.789138 systemd[1]: sshd@11-10.200.20.21:22-10.200.16.10:56156.service: Deactivated successfully.
May 17 00:52:58.790738 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:52:58.791221 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit.
May 17 00:52:58.792009 systemd-logind[1545]: Removed session 14.
May 17 00:53:03.860731 systemd[1]: Started sshd@12-10.200.20.21:22-10.200.16.10:60622.service.
May 17 00:53:04.311443 sshd[4025]: Accepted publickey for core from 10.200.16.10 port 60622 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:04.312749 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:04.316508 systemd-logind[1545]: New session 15 of user core.
May 17 00:53:04.316945 systemd[1]: Started session-15.scope.
May 17 00:53:04.717071 sshd[4025]: pam_unix(sshd:session): session closed for user core
May 17 00:53:04.719660 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit.
May 17 00:53:04.719792 systemd[1]: sshd@12-10.200.20.21:22-10.200.16.10:60622.service: Deactivated successfully.
May 17 00:53:04.720579 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:53:04.721023 systemd-logind[1545]: Removed session 15.
May 17 00:53:04.795444 systemd[1]: Started sshd@13-10.200.20.21:22-10.200.16.10:60632.service.
May 17 00:53:05.274995 sshd[4040]: Accepted publickey for core from 10.200.16.10 port 60632 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:05.276598 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:05.280596 systemd-logind[1545]: New session 16 of user core.
May 17 00:53:05.281059 systemd[1]: Started session-16.scope.
May 17 00:53:05.720324 sshd[4040]: pam_unix(sshd:session): session closed for user core
May 17 00:53:05.722992 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit.
May 17 00:53:05.724072 systemd[1]: sshd@13-10.200.20.21:22-10.200.16.10:60632.service: Deactivated successfully.
May 17 00:53:05.724956 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:53:05.725712 systemd-logind[1545]: Removed session 16.
May 17 00:53:05.794395 systemd[1]: Started sshd@14-10.200.20.21:22-10.200.16.10:60642.service.
May 17 00:53:06.247174 sshd[4050]: Accepted publickey for core from 10.200.16.10 port 60642 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:06.248468 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:06.252455 systemd-logind[1545]: New session 17 of user core.
May 17 00:53:06.252896 systemd[1]: Started session-17.scope.
May 17 00:53:08.044335 sshd[4050]: pam_unix(sshd:session): session closed for user core
May 17 00:53:08.047338 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit.
May 17 00:53:08.047906 systemd[1]: sshd@14-10.200.20.21:22-10.200.16.10:60642.service: Deactivated successfully.
May 17 00:53:08.048685 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:53:08.049927 systemd-logind[1545]: Removed session 17.
May 17 00:53:08.116545 systemd[1]: Started sshd@15-10.200.20.21:22-10.200.16.10:60644.service.
May 17 00:53:08.562469 sshd[4068]: Accepted publickey for core from 10.200.16.10 port 60644 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:08.563734 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:08.569083 systemd-logind[1545]: New session 18 of user core.
May 17 00:53:08.569724 systemd[1]: Started session-18.scope.
May 17 00:53:09.063320 sshd[4068]: pam_unix(sshd:session): session closed for user core
May 17 00:53:09.066185 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit.
May 17 00:53:09.067146 systemd[1]: sshd@15-10.200.20.21:22-10.200.16.10:60644.service: Deactivated successfully.
May 17 00:53:09.067919 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:53:09.068619 systemd-logind[1545]: Removed session 18.
May 17 00:53:09.137305 systemd[1]: Started sshd@16-10.200.20.21:22-10.200.16.10:38402.service.
May 17 00:53:09.584366 sshd[4079]: Accepted publickey for core from 10.200.16.10 port 38402 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:09.585614 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:09.590134 systemd[1]: Started session-19.scope.
May 17 00:53:09.590316 systemd-logind[1545]: New session 19 of user core.
May 17 00:53:09.982216 sshd[4079]: pam_unix(sshd:session): session closed for user core
May 17 00:53:09.985011 systemd[1]: sshd@16-10.200.20.21:22-10.200.16.10:38402.service: Deactivated successfully.
May 17 00:53:09.985201 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit.
May 17 00:53:09.985825 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:53:09.986533 systemd-logind[1545]: Removed session 19.
May 17 00:53:15.056476 systemd[1]: Started sshd@17-10.200.20.21:22-10.200.16.10:38408.service.
May 17 00:53:15.508774 sshd[4095]: Accepted publickey for core from 10.200.16.10 port 38408 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:15.510359 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:15.514648 systemd[1]: Started session-20.scope.
May 17 00:53:15.515116 systemd-logind[1545]: New session 20 of user core.
May 17 00:53:15.900319 sshd[4095]: pam_unix(sshd:session): session closed for user core
May 17 00:53:15.903200 systemd[1]: sshd@17-10.200.20.21:22-10.200.16.10:38408.service: Deactivated successfully.
May 17 00:53:15.904582 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:53:15.905135 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit.
May 17 00:53:15.905877 systemd-logind[1545]: Removed session 20.
May 17 00:53:20.979327 systemd[1]: Started sshd@18-10.200.20.21:22-10.200.16.10:53124.service.
May 17 00:53:21.456415 sshd[4108]: Accepted publickey for core from 10.200.16.10 port 53124 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:21.458016 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:21.462255 systemd[1]: Started session-21.scope.
May 17 00:53:21.462688 systemd-logind[1545]: New session 21 of user core.
May 17 00:53:21.870577 sshd[4108]: pam_unix(sshd:session): session closed for user core
May 17 00:53:21.873254 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit.
May 17 00:53:21.873400 systemd[1]: sshd@18-10.200.20.21:22-10.200.16.10:53124.service: Deactivated successfully.
May 17 00:53:21.874215 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:53:21.874650 systemd-logind[1545]: Removed session 21.
May 17 00:53:26.943389 systemd[1]: Started sshd@19-10.200.20.21:22-10.200.16.10:53128.service.
May 17 00:53:27.387979 sshd[4121]: Accepted publickey for core from 10.200.16.10 port 53128 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:27.389200 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:27.393105 systemd-logind[1545]: New session 22 of user core.
May 17 00:53:27.393499 systemd[1]: Started session-22.scope.
May 17 00:53:27.790832 sshd[4121]: pam_unix(sshd:session): session closed for user core
May 17 00:53:27.793211 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit.
May 17 00:53:27.793524 systemd[1]: sshd@19-10.200.20.21:22-10.200.16.10:53128.service: Deactivated successfully.
May 17 00:53:27.794281 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:53:27.795339 systemd-logind[1545]: Removed session 22.
May 17 00:53:27.868926 systemd[1]: Started sshd@20-10.200.20.21:22-10.200.16.10:53136.service.
May 17 00:53:28.350097 sshd[4133]: Accepted publickey for core from 10.200.16.10 port 53136 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:28.351383 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:28.355175 systemd-logind[1545]: New session 23 of user core.
May 17 00:53:28.355994 systemd[1]: Started session-23.scope.
May 17 00:53:31.228215 systemd[1]: run-containerd-runc-k8s.io-cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396-runc.Wjzn0K.mount: Deactivated successfully.
May 17 00:53:31.234522 env[1557]: time="2025-05-17T00:53:31.234478660Z" level=info msg="StopContainer for \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\" with timeout 30 (s)"
May 17 00:53:31.238040 env[1557]: time="2025-05-17T00:53:31.237982810Z" level=info msg="Stop container \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\" with signal terminated"
May 17 00:53:31.248237 env[1557]: time="2025-05-17T00:53:31.248186029Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:53:31.254779 env[1557]: time="2025-05-17T00:53:31.254744541Z" level=info msg="StopContainer for \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\" with timeout 2 (s)"
May 17 00:53:31.255372 env[1557]: time="2025-05-17T00:53:31.255349126Z" level=info msg="Stop container \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\" with signal terminated"
May 17 00:53:31.271706 systemd-networkd[1758]: lxc_health: Link DOWN
May 17 00:53:31.271712 systemd-networkd[1758]: lxc_health: Lost carrier
May 17 00:53:31.288380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52-rootfs.mount: Deactivated successfully.
May 17 00:53:31.310901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396-rootfs.mount: Deactivated successfully.
May 17 00:53:31.342148 env[1557]: time="2025-05-17T00:53:31.342105106Z" level=info msg="shim disconnected" id=cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396
May 17 00:53:31.342148 env[1557]: time="2025-05-17T00:53:31.342144945Z" level=warning msg="cleaning up after shim disconnected" id=cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396 namespace=k8s.io
May 17 00:53:31.342148 env[1557]: time="2025-05-17T00:53:31.342154744Z" level=info msg="cleaning up dead shim"
May 17 00:53:31.342401 env[1557]: time="2025-05-17T00:53:31.342096946Z" level=info msg="shim disconnected" id=0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52
May 17 00:53:31.342436 env[1557]: time="2025-05-17T00:53:31.342405578Z" level=warning msg="cleaning up after shim disconnected" id=0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52 namespace=k8s.io
May 17 00:53:31.342436 env[1557]: time="2025-05-17T00:53:31.342418858Z" level=info msg="cleaning up dead shim"
May 17 00:53:31.349731 env[1557]: time="2025-05-17T00:53:31.349683512Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4203 runtime=io.containerd.runc.v2\n"
May 17 00:53:31.350994 env[1557]: time="2025-05-17T00:53:31.350960399Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4204 runtime=io.containerd.runc.v2\n"
May 17 00:53:31.353818 env[1557]: time="2025-05-17T00:53:31.353783607Z" level=info msg="StopContainer for \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\" returns successfully"
May 17 00:53:31.354339 env[1557]: time="2025-05-17T00:53:31.354309953Z" level=info msg="StopPodSandbox for \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\""
May 17 00:53:31.354408 env[1557]: time="2025-05-17T00:53:31.354369872Z" level=info msg="Container to stop \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:53:31.354408 env[1557]: time="2025-05-17T00:53:31.354383311Z" level=info msg="Container to stop \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:53:31.354408 env[1557]: time="2025-05-17T00:53:31.354394591Z" level=info msg="Container to stop \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:53:31.354619 env[1557]: time="2025-05-17T00:53:31.354405951Z" level=info msg="Container to stop \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:53:31.354619 env[1557]: time="2025-05-17T00:53:31.354417711Z" level=info msg="Container to stop \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:53:31.356446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea-shm.mount: Deactivated successfully.
May 17 00:53:31.358445 env[1557]: time="2025-05-17T00:53:31.357699027Z" level=info msg="StopContainer for \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\" returns successfully"
May 17 00:53:31.359135 env[1557]: time="2025-05-17T00:53:31.359104631Z" level=info msg="StopPodSandbox for \"a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37\""
May 17 00:53:31.359287 env[1557]: time="2025-05-17T00:53:31.359267186Z" level=info msg="Container to stop \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:53:31.403432 env[1557]: time="2025-05-17T00:53:31.403380738Z" level=info msg="shim disconnected" id=a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37
May 17 00:53:31.403679 env[1557]: time="2025-05-17T00:53:31.403661250Z" level=warning msg="cleaning up after shim disconnected" id=a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37 namespace=k8s.io
May 17 00:53:31.403756 env[1557]: time="2025-05-17T00:53:31.403743208Z" level=info msg="cleaning up dead shim"
May 17 00:53:31.404610 env[1557]: time="2025-05-17T00:53:31.404582347Z" level=info msg="shim disconnected" id=01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea
May 17 00:53:31.404890 env[1557]: time="2025-05-17T00:53:31.404868500Z" level=warning msg="cleaning up after shim disconnected" id=01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea namespace=k8s.io
May 17 00:53:31.405298 env[1557]: time="2025-05-17T00:53:31.405279169Z" level=info msg="cleaning up dead shim"
May 17 00:53:31.414763 env[1557]: time="2025-05-17T00:53:31.414705208Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4270 runtime=io.containerd.runc.v2\n"
May 17 00:53:31.415777 env[1557]: time="2025-05-17T00:53:31.415747861Z" level=info msg="TearDown network for sandbox \"a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37\" successfully"
May 17 00:53:31.416125 env[1557]: time="2025-05-17T00:53:31.416103372Z" level=info msg="StopPodSandbox for \"a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37\" returns successfully"
May 17 00:53:31.416292 env[1557]: time="2025-05-17T00:53:31.416085933Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4271 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:53:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
May 17 00:53:31.417829 env[1557]: time="2025-05-17T00:53:31.417805049Z" level=info msg="TearDown network for sandbox \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" successfully"
May 17 00:53:31.417964 env[1557]: time="2025-05-17T00:53:31.417944365Z" level=info msg="StopPodSandbox for \"01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea\" returns successfully"
May 17 00:53:31.477776 kubelet[2603]: I0517 00:53:31.477722 2603 scope.go:117] "RemoveContainer" containerID="cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396"
May 17 00:53:31.479920 env[1557]: time="2025-05-17T00:53:31.479824982Z" level=info msg="RemoveContainer for \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\""
May 17 00:53:31.491820 env[1557]: time="2025-05-17T00:53:31.491781996Z" level=info msg="RemoveContainer for \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\" returns successfully"
May 17 00:53:31.492215 kubelet[2603]: I0517 00:53:31.492192 2603 scope.go:117] "RemoveContainer" containerID="15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a"
May 17 00:53:31.493372 env[1557]: time="2025-05-17T00:53:31.493345276Z" level=info msg="RemoveContainer for \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\""
May 17 00:53:31.516118 env[1557]: time="2025-05-17T00:53:31.516077214Z" level=info msg="RemoveContainer for \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\" returns successfully"
May 17 00:53:31.516511 kubelet[2603]: I0517 00:53:31.516491 2603 scope.go:117] "RemoveContainer" containerID="2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f"
May 17 00:53:31.517746 env[1557]: time="2025-05-17T00:53:31.517715732Z" level=info msg="RemoveContainer for \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\""
May 17 00:53:31.527406 env[1557]: time="2025-05-17T00:53:31.527368285Z" level=info msg="RemoveContainer for \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\" returns successfully"
May 17 00:53:31.527737 kubelet[2603]: I0517 00:53:31.527705 2603 scope.go:117] "RemoveContainer" containerID="e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a"
May 17 00:53:31.528825 env[1557]: time="2025-05-17T00:53:31.528797408Z" level=info msg="RemoveContainer for \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\""
May 17 00:53:31.536418 env[1557]: time="2025-05-17T00:53:31.536385934Z" level=info msg="RemoveContainer for \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\" returns successfully"
May 17 00:53:31.536596 kubelet[2603]: I0517 00:53:31.536567 2603 scope.go:117] "RemoveContainer" containerID="19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8"
May 17 00:53:31.537592 env[1557]: time="2025-05-17T00:53:31.537566224Z" level=info msg="RemoveContainer for \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\""
May 17 00:53:31.547380 env[1557]: time="2025-05-17T00:53:31.547340014Z" level=info msg="RemoveContainer for \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\" returns successfully"
May 17 00:53:31.547569 kubelet[2603]: I0517 00:53:31.547539 2603 scope.go:117] "RemoveContainer" containerID="cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396"
May 17 00:53:31.547965 env[1557]: time="2025-05-17T00:53:31.547888200Z" level=error msg="ContainerStatus for \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\": not found"
May 17 00:53:31.548094 kubelet[2603]: E0517 00:53:31.548065 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\": not found" containerID="cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396"
May 17 00:53:31.548185 kubelet[2603]: I0517 00:53:31.548103 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396"} err="failed to get container status \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\": rpc error: code = NotFound desc = an error occurred when try to find container \"cebd8368015f07ea1111ece6741090d91184d3ec1156d047e87e647e643d4396\": not found"
May 17 00:53:31.548229 kubelet[2603]: I0517 00:53:31.548185 2603 scope.go:117] "RemoveContainer" containerID="15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a"
May 17 00:53:31.548403 env[1557]: time="2025-05-17T00:53:31.548351028Z" level=error msg="ContainerStatus for \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\": not found"
May 17 00:53:31.548522 kubelet[2603]: E0517 00:53:31.548498 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\": not found" containerID="15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a"
May 17 00:53:31.548565 kubelet[2603]: I0517 00:53:31.548526 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a"} err="failed to get container status \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\": rpc error: code = NotFound desc = an error occurred when try to find container \"15854036d56931ce17aedc5b29b35b76934c8f71f8b6118a2184630dc5bcf72a\": not found"
May 17 00:53:31.548565 kubelet[2603]: I0517 00:53:31.548548 2603 scope.go:117] "RemoveContainer" containerID="2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f"
May 17 00:53:31.548767 env[1557]: time="2025-05-17T00:53:31.548722579Z" level=error msg="ContainerStatus for \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\": not found"
May 17 00:53:31.548890 kubelet[2603]: E0517 00:53:31.548867 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\": not found" containerID="2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f"
May 17 00:53:31.548939 kubelet[2603]: I0517 00:53:31.548895 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f"} err="failed to get container status \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2bc849f6fc26a4dff132d32fa062e1a0485649a129f603547004ad23a480622f\": not found"
May 17 00:53:31.548939 kubelet[2603]: I0517 00:53:31.548911 2603 scope.go:117] "RemoveContainer" containerID="e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a"
May 17 00:53:31.549108 env[1557]: time="2025-05-17T00:53:31.549058690Z" level=error msg="ContainerStatus for \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\": not found"
May 17 00:53:31.549217 kubelet[2603]: E0517 00:53:31.549192 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\": not found" containerID="e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a"
May 17 00:53:31.549260 kubelet[2603]: I0517 00:53:31.549222 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a"} err="failed to get container status \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7bbd9c2f2dd8c41b6d2382bf0970595e1bce3e8030ece9457edd202e536d36a\": not found"
May 17 00:53:31.549260 kubelet[2603]: I0517 00:53:31.549237 2603 scope.go:117] "RemoveContainer" containerID="19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8"
May 17 00:53:31.549415 env[1557]: time="2025-05-17T00:53:31.549365242Z" level=error msg="ContainerStatus for \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\": not found"
May 17 00:53:31.549515 kubelet[2603]: E0517 00:53:31.549491 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\": not found" containerID="19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8"
May 17 00:53:31.549557 kubelet[2603]: I0517 00:53:31.549525 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8"} err="failed to get container status \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"19cb168062aef68e37727c2b46c80a901d930ed0c100bf413ee7c2666c4ff7b8\": not found"
May 17 00:53:31.549557 kubelet[2603]: I0517 00:53:31.549541 2603 scope.go:117] "RemoveContainer" containerID="0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52"
May 17 00:53:31.550439 env[1557]: time="2025-05-17T00:53:31.550411295Z" level=info msg="RemoveContainer for \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\""
May 17 00:53:31.557780 env[1557]: time="2025-05-17T00:53:31.557746828Z" level=info msg="RemoveContainer for \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\" returns successfully"
May 17 00:53:31.557952 kubelet[2603]: I0517 00:53:31.557930 2603 scope.go:117] "RemoveContainer" containerID="0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52"
May 17 00:53:31.558162 env[1557]: time="2025-05-17T00:53:31.558113058Z" level=error msg="ContainerStatus for \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\": not found"
May 17 00:53:31.558301 kubelet[2603]: E0517 00:53:31.558283 2603 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\": not found" containerID="0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52"
May 17 00:53:31.558458 kubelet[2603]: I0517 00:53:31.558440 2603 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52"} err="failed to get container status \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bfb1e61d3df77edd827f70e72ef98df45fcc415f422b0f1c4a06454b5e36c52\": not found"
May 17 00:53:31.568941 kubelet[2603]: I0517 00:53:31.568900 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-hostproc\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569116 kubelet[2603]: I0517 00:53:31.569101 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-host-proc-sys-net\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569208 kubelet[2603]: I0517 00:53:31.568816 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-hostproc" (OuterVolumeSpecName: "hostproc") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:31.569301 kubelet[2603]: I0517 00:53:31.569288 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:31.569392 kubelet[2603]: I0517 00:53:31.569194 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-lib-modules\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569444 kubelet[2603]: I0517 00:53:31.569408 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hn2n\" (UniqueName: \"kubernetes.io/projected/9be6a26b-108a-4f42-a9e7-dea1f7181291-kube-api-access-2hn2n\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569444 kubelet[2603]: I0517 00:53:31.569429 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cni-path\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569498 kubelet[2603]: I0517 00:53:31.569446 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-bpf-maps\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569498 kubelet[2603]: I0517 00:53:31.569464 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9be6a26b-108a-4f42-a9e7-dea1f7181291-clustermesh-secrets\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569498 kubelet[2603]: I0517 00:53:31.569481 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-etc-cni-netd\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569498 kubelet[2603]: I0517 00:53:31.569494 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-run\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569592 kubelet[2603]: I0517 00:53:31.569511 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-config-path\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569592 kubelet[2603]: I0517 00:53:31.569527 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb2b49a7-c400-4835-861f-4b4a7927dd12-cilium-config-path\") pod \"bb2b49a7-c400-4835-861f-4b4a7927dd12\" (UID: \"bb2b49a7-c400-4835-861f-4b4a7927dd12\") "
May 17 00:53:31.569592 kubelet[2603]: I0517 00:53:31.569542 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-cgroup\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569592 kubelet[2603]: I0517 00:53:31.569556 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-host-proc-sys-kernel\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569592 kubelet[2603]: I0517 00:53:31.569571 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9be6a26b-108a-4f42-a9e7-dea1f7181291-hubble-tls\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569592 kubelet[2603]: I0517 00:53:31.569589 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssz9n\" (UniqueName: \"kubernetes.io/projected/bb2b49a7-c400-4835-861f-4b4a7927dd12-kube-api-access-ssz9n\") pod \"bb2b49a7-c400-4835-861f-4b4a7927dd12\" (UID: \"bb2b49a7-c400-4835-861f-4b4a7927dd12\") "
May 17 00:53:31.569747 kubelet[2603]: I0517 00:53:31.569606 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-xtables-lock\") pod \"9be6a26b-108a-4f42-a9e7-dea1f7181291\" (UID: \"9be6a26b-108a-4f42-a9e7-dea1f7181291\") "
May 17 00:53:31.569747 kubelet[2603]: I0517 00:53:31.569663 2603 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-lib-modules\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:31.569747 kubelet[2603]: I0517 00:53:31.569674 2603 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-hostproc\") on node \"ci-3510.3.7-n-ce3994935d\"
DevicePath \"\"" May 17 00:53:31.569747 kubelet[2603]: I0517 00:53:31.569182 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:31.569747 kubelet[2603]: I0517 00:53:31.569695 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:31.570419 kubelet[2603]: I0517 00:53:31.570377 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cni-path" (OuterVolumeSpecName: "cni-path") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:31.570481 kubelet[2603]: I0517 00:53:31.570423 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:31.570829 kubelet[2603]: I0517 00:53:31.570795 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:31.570905 kubelet[2603]: I0517 00:53:31.570834 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:31.573245 kubelet[2603]: I0517 00:53:31.573189 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:31.573322 kubelet[2603]: I0517 00:53:31.573252 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:53:31.574107 kubelet[2603]: I0517 00:53:31.574084 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be6a26b-108a-4f42-a9e7-dea1f7181291-kube-api-access-2hn2n" (OuterVolumeSpecName: "kube-api-access-2hn2n") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "kube-api-access-2hn2n". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:53:31.575165 kubelet[2603]: I0517 00:53:31.575131 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb2b49a7-c400-4835-861f-4b4a7927dd12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bb2b49a7-c400-4835-861f-4b4a7927dd12" (UID: "bb2b49a7-c400-4835-861f-4b4a7927dd12"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:53:31.575234 kubelet[2603]: I0517 00:53:31.575199 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be6a26b-108a-4f42-a9e7-dea1f7181291-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:53:31.575985 kubelet[2603]: I0517 00:53:31.575953 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:53:31.577190 kubelet[2603]: I0517 00:53:31.577150 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be6a26b-108a-4f42-a9e7-dea1f7181291-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9be6a26b-108a-4f42-a9e7-dea1f7181291" (UID: "9be6a26b-108a-4f42-a9e7-dea1f7181291"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:53:31.578104 kubelet[2603]: I0517 00:53:31.578080 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2b49a7-c400-4835-861f-4b4a7927dd12-kube-api-access-ssz9n" (OuterVolumeSpecName: "kube-api-access-ssz9n") pod "bb2b49a7-c400-4835-861f-4b4a7927dd12" (UID: "bb2b49a7-c400-4835-861f-4b4a7927dd12"). InnerVolumeSpecName "kube-api-access-ssz9n". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:53:31.670388 kubelet[2603]: I0517 00:53:31.670356 2603 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9be6a26b-108a-4f42-a9e7-dea1f7181291-clustermesh-secrets\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.670567 kubelet[2603]: I0517 00:53:31.670554 2603 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-etc-cni-netd\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.670650 kubelet[2603]: I0517 00:53:31.670616 2603 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-run\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.670717 kubelet[2603]: I0517 00:53:31.670707 2603 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-cgroup\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.670774 kubelet[2603]: I0517 00:53:31.670765 2603 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.670875 kubelet[2603]: I0517 00:53:31.670863 2603 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9be6a26b-108a-4f42-a9e7-dea1f7181291-cilium-config-path\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.670938 kubelet[2603]: I0517 00:53:31.670928 2603 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb2b49a7-c400-4835-861f-4b4a7927dd12-cilium-config-path\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.670995 kubelet[2603]: I0517 00:53:31.670986 2603 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-xtables-lock\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.671047 kubelet[2603]: I0517 00:53:31.671038 2603 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9be6a26b-108a-4f42-a9e7-dea1f7181291-hubble-tls\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.671102 kubelet[2603]: I0517 00:53:31.671093 2603 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssz9n\" (UniqueName: \"kubernetes.io/projected/bb2b49a7-c400-4835-861f-4b4a7927dd12-kube-api-access-ssz9n\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.671157 kubelet[2603]: I0517 00:53:31.671148 2603 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2hn2n\" (UniqueName: \"kubernetes.io/projected/9be6a26b-108a-4f42-a9e7-dea1f7181291-kube-api-access-2hn2n\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.671216 kubelet[2603]: I0517 00:53:31.671206 2603 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-host-proc-sys-net\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.671275 kubelet[2603]: I0517 00:53:31.671263 2603 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-cni-path\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:31.671326 kubelet[2603]: I0517 00:53:31.671318 2603 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9be6a26b-108a-4f42-a9e7-dea1f7181291-bpf-maps\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\"" May 17 00:53:32.085420 kubelet[2603]: I0517 00:53:32.085382 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be6a26b-108a-4f42-a9e7-dea1f7181291" path="/var/lib/kubelet/pods/9be6a26b-108a-4f42-a9e7-dea1f7181291/volumes" May 17 00:53:32.086305 kubelet[2603]: I0517 00:53:32.086289 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb2b49a7-c400-4835-861f-4b4a7927dd12" path="/var/lib/kubelet/pods/bb2b49a7-c400-4835-861f-4b4a7927dd12/volumes" May 17 00:53:32.225143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37-rootfs.mount: Deactivated successfully. May 17 00:53:32.225298 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3fdc0144968e4439abc535e20fb95d69bf8202e1ed05c7c3c64458ecc93cf37-shm.mount: Deactivated successfully. 
May 17 00:53:32.225387 systemd[1]: var-lib-kubelet-pods-bb2b49a7\x2dc400\x2d4835\x2d861f\x2d4b4a7927dd12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssz9n.mount: Deactivated successfully. May 17 00:53:32.225476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01eca398cf8240c68eb34ac99c93974e25a6f4f57592b0c0d6786cc9eb78e1ea-rootfs.mount: Deactivated successfully. May 17 00:53:32.225548 systemd[1]: var-lib-kubelet-pods-9be6a26b\x2d108a\x2d4f42\x2da9e7\x2ddea1f7181291-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2hn2n.mount: Deactivated successfully. May 17 00:53:32.225656 systemd[1]: var-lib-kubelet-pods-9be6a26b\x2d108a\x2d4f42\x2da9e7\x2ddea1f7181291-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:53:32.225744 systemd[1]: var-lib-kubelet-pods-9be6a26b\x2d108a\x2d4f42\x2da9e7\x2ddea1f7181291-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:53:33.180353 kubelet[2603]: E0517 00:53:33.180279 2603 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:53:33.244594 sshd[4133]: pam_unix(sshd:session): session closed for user core May 17 00:53:33.247320 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. May 17 00:53:33.247469 systemd[1]: sshd@20-10.200.20.21:22-10.200.16.10:53136.service: Deactivated successfully. May 17 00:53:33.248221 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:53:33.248659 systemd-logind[1545]: Removed session 23. May 17 00:53:33.324125 systemd[1]: Started sshd@21-10.200.20.21:22-10.200.16.10:39468.service. 
May 17 00:53:33.809735 sshd[4302]: Accepted publickey for core from 10.200.16.10 port 39468 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:33.810975 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:33.815306 systemd[1]: Started session-24.scope. May 17 00:53:33.815507 systemd-logind[1545]: New session 24 of user core. May 17 00:53:35.219768 kubelet[2603]: E0517 00:53:35.219733 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9be6a26b-108a-4f42-a9e7-dea1f7181291" containerName="apply-sysctl-overwrites" May 17 00:53:35.220191 kubelet[2603]: E0517 00:53:35.220177 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9be6a26b-108a-4f42-a9e7-dea1f7181291" containerName="mount-bpf-fs" May 17 00:53:35.220251 kubelet[2603]: E0517 00:53:35.220241 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb2b49a7-c400-4835-861f-4b4a7927dd12" containerName="cilium-operator" May 17 00:53:35.220315 kubelet[2603]: E0517 00:53:35.220304 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9be6a26b-108a-4f42-a9e7-dea1f7181291" containerName="clean-cilium-state" May 17 00:53:35.220367 kubelet[2603]: E0517 00:53:35.220357 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9be6a26b-108a-4f42-a9e7-dea1f7181291" containerName="cilium-agent" May 17 00:53:35.220419 kubelet[2603]: E0517 00:53:35.220410 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9be6a26b-108a-4f42-a9e7-dea1f7181291" containerName="mount-cgroup" May 17 00:53:35.220500 kubelet[2603]: I0517 00:53:35.220490 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be6a26b-108a-4f42-a9e7-dea1f7181291" containerName="cilium-agent" May 17 00:53:35.220556 kubelet[2603]: I0517 00:53:35.220546 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb2b49a7-c400-4835-861f-4b4a7927dd12" 
containerName="cilium-operator" May 17 00:53:35.259069 sshd[4302]: pam_unix(sshd:session): session closed for user core May 17 00:53:35.261601 systemd[1]: sshd@21-10.200.20.21:22-10.200.16.10:39468.service: Deactivated successfully. May 17 00:53:35.262815 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:53:35.263343 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit. May 17 00:53:35.264475 systemd-logind[1545]: Removed session 24. May 17 00:53:35.289836 kubelet[2603]: I0517 00:53:35.289796 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-run\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.289959 kubelet[2603]: I0517 00:53:35.289872 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-config-path\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.289959 kubelet[2603]: I0517 00:53:35.289895 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-ipsec-secrets\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.289959 kubelet[2603]: I0517 00:53:35.289911 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6511093e-57c9-4a5e-b642-bfbb56547246-hubble-tls\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.289959 
kubelet[2603]: I0517 00:53:35.289950 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-host-proc-sys-net\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290070 kubelet[2603]: I0517 00:53:35.289967 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56df\" (UniqueName: \"kubernetes.io/projected/6511093e-57c9-4a5e-b642-bfbb56547246-kube-api-access-k56df\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290070 kubelet[2603]: I0517 00:53:35.289985 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6511093e-57c9-4a5e-b642-bfbb56547246-clustermesh-secrets\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290070 kubelet[2603]: I0517 00:53:35.290037 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-bpf-maps\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290070 kubelet[2603]: I0517 00:53:35.290053 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-lib-modules\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290158 kubelet[2603]: I0517 00:53:35.290072 2603 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-cgroup\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290158 kubelet[2603]: I0517 00:53:35.290112 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cni-path\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290158 kubelet[2603]: I0517 00:53:35.290128 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-etc-cni-netd\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290158 kubelet[2603]: I0517 00:53:35.290146 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-hostproc\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290247 kubelet[2603]: I0517 00:53:35.290183 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-host-proc-sys-kernel\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.290247 kubelet[2603]: I0517 00:53:35.290200 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-xtables-lock\") pod \"cilium-645mf\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " pod="kube-system/cilium-645mf" May 17 00:53:35.336842 systemd[1]: Started sshd@22-10.200.20.21:22-10.200.16.10:39472.service. May 17 00:53:35.524429 env[1557]: time="2025-05-17T00:53:35.523564365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-645mf,Uid:6511093e-57c9-4a5e-b642-bfbb56547246,Namespace:kube-system,Attempt:0,}" May 17 00:53:35.559554 env[1557]: time="2025-05-17T00:53:35.559493528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:53:35.559724 env[1557]: time="2025-05-17T00:53:35.559702363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:53:35.560038 env[1557]: time="2025-05-17T00:53:35.560012635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:53:35.560427 env[1557]: time="2025-05-17T00:53:35.560374026Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b pid=4330 runtime=io.containerd.runc.v2 May 17 00:53:35.592992 env[1557]: time="2025-05-17T00:53:35.592941232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-645mf,Uid:6511093e-57c9-4a5e-b642-bfbb56547246,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b\"" May 17 00:53:35.597567 env[1557]: time="2025-05-17T00:53:35.597522800Z" level=info msg="CreateContainer within sandbox \"9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:53:35.631846 env[1557]: time="2025-05-17T00:53:35.631779884Z" level=info msg="CreateContainer within sandbox \"9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49\"" May 17 00:53:35.632984 env[1557]: time="2025-05-17T00:53:35.632960215Z" level=info msg="StartContainer for \"fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49\"" May 17 00:53:35.683107 env[1557]: time="2025-05-17T00:53:35.683067872Z" level=info msg="StartContainer for \"fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49\" returns successfully" May 17 00:53:35.752996 env[1557]: time="2025-05-17T00:53:35.752952886Z" level=info msg="shim disconnected" id=fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49 May 17 00:53:35.753221 env[1557]: time="2025-05-17T00:53:35.753204520Z" level=warning msg="cleaning up after shim disconnected" id=fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49 
namespace=k8s.io May 17 00:53:35.753281 env[1557]: time="2025-05-17T00:53:35.753269599Z" level=info msg="cleaning up dead shim" May 17 00:53:35.760392 env[1557]: time="2025-05-17T00:53:35.760356586Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4413 runtime=io.containerd.runc.v2\n" May 17 00:53:35.783732 sshd[4316]: Accepted publickey for core from 10.200.16.10 port 39472 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:53:35.785392 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:53:35.789785 systemd[1]: Started session-25.scope. May 17 00:53:35.790012 systemd-logind[1545]: New session 25 of user core. May 17 00:53:36.214594 sshd[4316]: pam_unix(sshd:session): session closed for user core May 17 00:53:36.217007 systemd-logind[1545]: Session 25 logged out. Waiting for processes to exit. May 17 00:53:36.217288 systemd[1]: sshd@22-10.200.20.21:22-10.200.16.10:39472.service: Deactivated successfully. May 17 00:53:36.218036 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:53:36.219095 systemd-logind[1545]: Removed session 25. May 17 00:53:36.291516 systemd[1]: Started sshd@23-10.200.20.21:22-10.200.16.10:39484.service. May 17 00:53:36.496368 env[1557]: time="2025-05-17T00:53:36.496185528Z" level=info msg="StopPodSandbox for \"9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b\"" May 17 00:53:36.497647 env[1557]: time="2025-05-17T00:53:36.497522536Z" level=info msg="Container to stop \"fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:53:36.499706 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b-shm.mount: Deactivated successfully. 
May 17 00:53:36.538537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b-rootfs.mount: Deactivated successfully. May 17 00:53:36.569714 env[1557]: time="2025-05-17T00:53:36.569667556Z" level=info msg="shim disconnected" id=9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b May 17 00:53:36.570191 env[1557]: time="2025-05-17T00:53:36.570162224Z" level=warning msg="cleaning up after shim disconnected" id=9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b namespace=k8s.io May 17 00:53:36.570262 env[1557]: time="2025-05-17T00:53:36.570248302Z" level=info msg="cleaning up dead shim" May 17 00:53:36.578545 env[1557]: time="2025-05-17T00:53:36.578505742Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4458 runtime=io.containerd.runc.v2\n" May 17 00:53:36.579008 env[1557]: time="2025-05-17T00:53:36.578982131Z" level=info msg="TearDown network for sandbox \"9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b\" successfully" May 17 00:53:36.579095 env[1557]: time="2025-05-17T00:53:36.579079008Z" level=info msg="StopPodSandbox for \"9f12ff49ed9ecd9b899274385b8c51077fc7c57b130ffb69a8bf04af49e82f2b\" returns successfully" May 17 00:53:36.602565 kubelet[2603]: I0517 00:53:36.602528 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-xtables-lock\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") " May 17 00:53:36.603151 kubelet[2603]: I0517 00:53:36.603130 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-config-path\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: 
\"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.603361 kubelet[2603]: I0517 00:53:36.603344 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-cgroup\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.603605 kubelet[2603]: I0517 00:53:36.603589 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-run\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.608737 kubelet[2603]: I0517 00:53:36.608713 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-ipsec-secrets\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.608903 kubelet[2603]: I0517 00:53:36.608889 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-hostproc\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.608976 kubelet[2603]: I0517 00:53:36.608965 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cni-path\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.609050 kubelet[2603]: I0517 00:53:36.609039 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-lib-modules\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.609115 kubelet[2603]: I0517 00:53:36.609104 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-host-proc-sys-kernel\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.609187 kubelet[2603]: I0517 00:53:36.609175 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-host-proc-sys-net\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.609255 kubelet[2603]: I0517 00:53:36.609244 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-etc-cni-netd\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.609322 kubelet[2603]: I0517 00:53:36.609312 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6511093e-57c9-4a5e-b642-bfbb56547246-hubble-tls\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.609393 kubelet[2603]: I0517 00:53:36.609383 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k56df\" (UniqueName: \"kubernetes.io/projected/6511093e-57c9-4a5e-b642-bfbb56547246-kube-api-access-k56df\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.609463 kubelet[2603]: I0517 00:53:36.609452 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6511093e-57c9-4a5e-b642-bfbb56547246-clustermesh-secrets\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.609529 kubelet[2603]: I0517 00:53:36.609519 2603 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-bpf-maps\") pod \"6511093e-57c9-4a5e-b642-bfbb56547246\" (UID: \"6511093e-57c9-4a5e-b642-bfbb56547246\") "
May 17 00:53:36.610183 kubelet[2603]: I0517 00:53:36.602749 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610183 kubelet[2603]: I0517 00:53:36.604058 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610183 kubelet[2603]: I0517 00:53:36.605041 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610183 kubelet[2603]: I0517 00:53:36.609612 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610324 kubelet[2603]: I0517 00:53:36.610203 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-hostproc" (OuterVolumeSpecName: "hostproc") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610324 kubelet[2603]: I0517 00:53:36.610223 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cni-path" (OuterVolumeSpecName: "cni-path") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610324 kubelet[2603]: I0517 00:53:36.610237 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610324 kubelet[2603]: I0517 00:53:36.610250 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610324 kubelet[2603]: I0517 00:53:36.610265 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.610436 kubelet[2603]: I0517 00:53:36.610277 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:53:36.613417 kubelet[2603]: I0517 00:53:36.613372 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 17 00:53:36.615962 systemd[1]: var-lib-kubelet-pods-6511093e\x2d57c9\x2d4a5e\x2db642\x2dbfbb56547246-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 17 00:53:36.619035 kubelet[2603]: I0517 00:53:36.619006 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 17 00:53:36.620599 systemd[1]: var-lib-kubelet-pods-6511093e\x2d57c9\x2d4a5e\x2db642\x2dbfbb56547246-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:53:36.620746 systemd[1]: var-lib-kubelet-pods-6511093e\x2d57c9\x2d4a5e\x2db642\x2dbfbb56547246-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk56df.mount: Deactivated successfully.
May 17 00:53:36.620930 kubelet[2603]: I0517 00:53:36.620899 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6511093e-57c9-4a5e-b642-bfbb56547246-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:53:36.621023 kubelet[2603]: I0517 00:53:36.621009 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6511093e-57c9-4a5e-b642-bfbb56547246-kube-api-access-k56df" (OuterVolumeSpecName: "kube-api-access-k56df") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "kube-api-access-k56df". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:53:36.621480 kubelet[2603]: I0517 00:53:36.621461 2603 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6511093e-57c9-4a5e-b642-bfbb56547246-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6511093e-57c9-4a5e-b642-bfbb56547246" (UID: "6511093e-57c9-4a5e-b642-bfbb56547246"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 17 00:53:36.709823 kubelet[2603]: I0517 00:53:36.709781 2603 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-config-path\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710010 kubelet[2603]: I0517 00:53:36.709997 2603 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-cgroup\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710096 kubelet[2603]: I0517 00:53:36.710086 2603 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-run\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710169 kubelet[2603]: I0517 00:53:36.710158 2603 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6511093e-57c9-4a5e-b642-bfbb56547246-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710235 kubelet[2603]: I0517 00:53:36.710225 2603 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-hostproc\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710305 kubelet[2603]: I0517 00:53:36.710296 2603 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-cni-path\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710367 kubelet[2603]: I0517 00:53:36.710356 2603 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-lib-modules\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710431 kubelet[2603]: I0517 00:53:36.710414 2603 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710494 kubelet[2603]: I0517 00:53:36.710484 2603 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-host-proc-sys-net\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710566 kubelet[2603]: I0517 00:53:36.710554 2603 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-etc-cni-netd\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710645 kubelet[2603]: I0517 00:53:36.710620 2603 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6511093e-57c9-4a5e-b642-bfbb56547246-hubble-tls\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710727 kubelet[2603]: I0517 00:53:36.710717 2603 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k56df\" (UniqueName: \"kubernetes.io/projected/6511093e-57c9-4a5e-b642-bfbb56547246-kube-api-access-k56df\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710794 kubelet[2603]: I0517 00:53:36.710784 2603 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6511093e-57c9-4a5e-b642-bfbb56547246-clustermesh-secrets\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710857 kubelet[2603]: I0517 00:53:36.710846 2603 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-bpf-maps\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.710925 kubelet[2603]: I0517 00:53:36.710905 2603 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6511093e-57c9-4a5e-b642-bfbb56547246-xtables-lock\") on node \"ci-3510.3.7-n-ce3994935d\" DevicePath \"\""
May 17 00:53:36.770110 sshd[4436]: Accepted publickey for core from 10.200.16.10 port 39484 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:53:36.773621 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:36.777595 systemd[1]: Started session-26.scope.
May 17 00:53:36.777944 systemd-logind[1545]: New session 26 of user core.
May 17 00:53:37.400058 systemd[1]: var-lib-kubelet-pods-6511093e\x2d57c9\x2d4a5e\x2db642\x2dbfbb56547246-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:53:37.498984 kubelet[2603]: I0517 00:53:37.498959 2603 scope.go:117] "RemoveContainer" containerID="fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49"
May 17 00:53:37.505556 env[1557]: time="2025-05-17T00:53:37.505509403Z" level=info msg="RemoveContainer for \"fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49\""
May 17 00:53:37.526255 env[1557]: time="2025-05-17T00:53:37.526217789Z" level=info msg="RemoveContainer for \"fc7ae7b77e196c8c51ee035d7df42b7ea481ab0cd3a1588c0029fa9a24af7c49\" returns successfully"
May 17 00:53:37.557709 kubelet[2603]: E0517 00:53:37.557678 2603 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6511093e-57c9-4a5e-b642-bfbb56547246" containerName="mount-cgroup"
May 17 00:53:37.557953 kubelet[2603]: I0517 00:53:37.557940 2603 memory_manager.go:354] "RemoveStaleState removing state" podUID="6511093e-57c9-4a5e-b642-bfbb56547246" containerName="mount-cgroup"
May 17 00:53:37.616487 kubelet[2603]: I0517 00:53:37.616456 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1617bfe2-a5db-439f-8aa4-0a094a9f331b-cilium-config-path\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.616939 kubelet[2603]: I0517 00:53:37.616921 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-bpf-maps\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617041 kubelet[2603]: I0517 00:53:37.617029 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-hostproc\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617133 kubelet[2603]: I0517 00:53:37.617121 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-etc-cni-netd\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617231 kubelet[2603]: I0517 00:53:37.617218 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-host-proc-sys-net\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617322 kubelet[2603]: I0517 00:53:37.617310 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-cni-path\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617418 kubelet[2603]: I0517 00:53:37.617405 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvp86\" (UniqueName: \"kubernetes.io/projected/1617bfe2-a5db-439f-8aa4-0a094a9f331b-kube-api-access-mvp86\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617511 kubelet[2603]: I0517 00:53:37.617498 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-host-proc-sys-kernel\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617603 kubelet[2603]: I0517 00:53:37.617591 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-cilium-run\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617709 kubelet[2603]: I0517 00:53:37.617688 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-cilium-cgroup\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617796 kubelet[2603]: I0517 00:53:37.617783 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-lib-modules\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617880 kubelet[2603]: I0517 00:53:37.617866 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1617bfe2-a5db-439f-8aa4-0a094a9f331b-xtables-lock\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.617953 kubelet[2603]: I0517 00:53:37.617941 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1617bfe2-a5db-439f-8aa4-0a094a9f331b-clustermesh-secrets\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.618048 kubelet[2603]: I0517 00:53:37.618034 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1617bfe2-a5db-439f-8aa4-0a094a9f331b-cilium-ipsec-secrets\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.618127 kubelet[2603]: I0517 00:53:37.618116 2603 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1617bfe2-a5db-439f-8aa4-0a094a9f331b-hubble-tls\") pod \"cilium-zwsz6\" (UID: \"1617bfe2-a5db-439f-8aa4-0a094a9f331b\") " pod="kube-system/cilium-zwsz6"
May 17 00:53:37.864863 env[1557]: time="2025-05-17T00:53:37.864780557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zwsz6,Uid:1617bfe2-a5db-439f-8aa4-0a094a9f331b,Namespace:kube-system,Attempt:0,}"
May 17 00:53:37.896522 env[1557]: time="2025-05-17T00:53:37.896335125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:53:37.896522 env[1557]: time="2025-05-17T00:53:37.896371484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:53:37.896522 env[1557]: time="2025-05-17T00:53:37.896381724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:53:37.896728 env[1557]: time="2025-05-17T00:53:37.896650638Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f pid=4495 runtime=io.containerd.runc.v2
May 17 00:53:37.972145 env[1557]: time="2025-05-17T00:53:37.972088839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zwsz6,Uid:1617bfe2-a5db-439f-8aa4-0a094a9f331b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\""
May 17 00:53:37.981654 env[1557]: time="2025-05-17T00:53:37.981587413Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:53:38.014503 env[1557]: time="2025-05-17T00:53:38.014452033Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21ef6742d71d3aa16a839e088821314daee4880c9e876d1f18fb8b0aea348cb8\""
May 17 00:53:38.015270 env[1557]: time="2025-05-17T00:53:38.015245094Z" level=info msg="StartContainer for \"21ef6742d71d3aa16a839e088821314daee4880c9e876d1f18fb8b0aea348cb8\""
May 17 00:53:38.067345 env[1557]: time="2025-05-17T00:53:38.067037833Z" level=info msg="StartContainer for \"21ef6742d71d3aa16a839e088821314daee4880c9e876d1f18fb8b0aea348cb8\" returns successfully"
May 17 00:53:38.087038 kubelet[2603]: I0517 00:53:38.086783 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6511093e-57c9-4a5e-b642-bfbb56547246" path="/var/lib/kubelet/pods/6511093e-57c9-4a5e-b642-bfbb56547246/volumes"
May 17 00:53:38.130263 env[1557]: time="2025-05-17T00:53:38.130144026Z" level=info msg="shim disconnected" id=21ef6742d71d3aa16a839e088821314daee4880c9e876d1f18fb8b0aea348cb8
May 17 00:53:38.130263 env[1557]: time="2025-05-17T00:53:38.130193585Z" level=warning msg="cleaning up after shim disconnected" id=21ef6742d71d3aa16a839e088821314daee4880c9e876d1f18fb8b0aea348cb8 namespace=k8s.io
May 17 00:53:38.130263 env[1557]: time="2025-05-17T00:53:38.130202465Z" level=info msg="cleaning up dead shim"
May 17 00:53:38.137044 env[1557]: time="2025-05-17T00:53:38.136997305Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4573 runtime=io.containerd.runc.v2\n"
May 17 00:53:38.181319 kubelet[2603]: E0517 00:53:38.181280 2603 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:53:38.505263 env[1557]: time="2025-05-17T00:53:38.505223547Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:53:38.558357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488279048.mount: Deactivated successfully.
May 17 00:53:38.564793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592660086.mount: Deactivated successfully.
May 17 00:53:38.576509 env[1557]: time="2025-05-17T00:53:38.576462988Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"84a163a350a2fb9de1039928e35d6795bd62189973559a50e0a1b8db8523b483\""
May 17 00:53:38.578359 env[1557]: time="2025-05-17T00:53:38.577240170Z" level=info msg="StartContainer for \"84a163a350a2fb9de1039928e35d6795bd62189973559a50e0a1b8db8523b483\""
May 17 00:53:38.623666 env[1557]: time="2025-05-17T00:53:38.622573021Z" level=info msg="StartContainer for \"84a163a350a2fb9de1039928e35d6795bd62189973559a50e0a1b8db8523b483\" returns successfully"
May 17 00:53:38.650054 env[1557]: time="2025-05-17T00:53:38.650006095Z" level=info msg="shim disconnected" id=84a163a350a2fb9de1039928e35d6795bd62189973559a50e0a1b8db8523b483
May 17 00:53:38.650054 env[1557]: time="2025-05-17T00:53:38.650051254Z" level=warning msg="cleaning up after shim disconnected" id=84a163a350a2fb9de1039928e35d6795bd62189973559a50e0a1b8db8523b483 namespace=k8s.io
May 17 00:53:38.650054 env[1557]: time="2025-05-17T00:53:38.650061414Z" level=info msg="cleaning up dead shim"
May 17 00:53:38.656089 env[1557]: time="2025-05-17T00:53:38.656045553Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4635 runtime=io.containerd.runc.v2\n"
May 17 00:53:39.513736 env[1557]: time="2025-05-17T00:53:39.513694000Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:53:39.555657 env[1557]: time="2025-05-17T00:53:39.555577065Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"46af8e050cb37aa40a3604a70d34b5c7e0d9e1a0910b5f2240484935cabde2b2\""
May 17 00:53:39.557907 env[1557]: time="2025-05-17T00:53:39.557876371Z" level=info msg="StartContainer for \"46af8e050cb37aa40a3604a70d34b5c7e0d9e1a0910b5f2240484935cabde2b2\""
May 17 00:53:39.624563 env[1557]: time="2025-05-17T00:53:39.624509419Z" level=info msg="StartContainer for \"46af8e050cb37aa40a3604a70d34b5c7e0d9e1a0910b5f2240484935cabde2b2\" returns successfully"
May 17 00:53:39.650478 env[1557]: time="2025-05-17T00:53:39.650426775Z" level=info msg="shim disconnected" id=46af8e050cb37aa40a3604a70d34b5c7e0d9e1a0910b5f2240484935cabde2b2
May 17 00:53:39.650478 env[1557]: time="2025-05-17T00:53:39.650473094Z" level=warning msg="cleaning up after shim disconnected" id=46af8e050cb37aa40a3604a70d34b5c7e0d9e1a0910b5f2240484935cabde2b2 namespace=k8s.io
May 17 00:53:39.650478 env[1557]: time="2025-05-17T00:53:39.650482254Z" level=info msg="cleaning up dead shim"
May 17 00:53:39.657682 env[1557]: time="2025-05-17T00:53:39.657610288Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4694 runtime=io.containerd.runc.v2\n"
May 17 00:53:40.400299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46af8e050cb37aa40a3604a70d34b5c7e0d9e1a0910b5f2240484935cabde2b2-rootfs.mount: Deactivated successfully.
May 17 00:53:40.519197 env[1557]: time="2025-05-17T00:53:40.518990360Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:53:40.548466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422573920.mount: Deactivated successfully.
May 17 00:53:40.556282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857299766.mount: Deactivated successfully.
May 17 00:53:40.568038 env[1557]: time="2025-05-17T00:53:40.567966272Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"72b39e76eb30f0e280dfde0d8a84087cd84458921fb05893ebeff8fa48f7646e\""
May 17 00:53:40.569993 env[1557]: time="2025-05-17T00:53:40.568696135Z" level=info msg="StartContainer for \"72b39e76eb30f0e280dfde0d8a84087cd84458921fb05893ebeff8fa48f7646e\""
May 17 00:53:40.615091 env[1557]: time="2025-05-17T00:53:40.615049748Z" level=info msg="StartContainer for \"72b39e76eb30f0e280dfde0d8a84087cd84458921fb05893ebeff8fa48f7646e\" returns successfully"
May 17 00:53:40.645362 env[1557]: time="2025-05-17T00:53:40.645309371Z" level=info msg="shim disconnected" id=72b39e76eb30f0e280dfde0d8a84087cd84458921fb05893ebeff8fa48f7646e
May 17 00:53:40.645675 env[1557]: time="2025-05-17T00:53:40.645610924Z" level=warning msg="cleaning up after shim disconnected" id=72b39e76eb30f0e280dfde0d8a84087cd84458921fb05893ebeff8fa48f7646e namespace=k8s.io
May 17 00:53:40.645785 env[1557]: time="2025-05-17T00:53:40.645769520Z" level=info msg="cleaning up dead shim"
May 17 00:53:40.652668 env[1557]: time="2025-05-17T00:53:40.652288610Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:53:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4751 runtime=io.containerd.runc.v2\n"
May 17 00:53:41.523812 env[1557]: time="2025-05-17T00:53:41.523763039Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:53:41.547208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879462415.mount: Deactivated successfully.
May 17 00:53:41.562060 env[1557]: time="2025-05-17T00:53:41.562018048Z" level=info msg="CreateContainer within sandbox \"e88a0c2f56482c76142856f4c6af0f09c18859a784dc656437cc2158df6ccb1f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a4d14b93214c215685bc9a76022f93b4eaec0b3e45477127a89e112da476b66\""
May 17 00:53:41.562821 env[1557]: time="2025-05-17T00:53:41.562795750Z" level=info msg="StartContainer for \"4a4d14b93214c215685bc9a76022f93b4eaec0b3e45477127a89e112da476b66\""
May 17 00:53:41.620196 env[1557]: time="2025-05-17T00:53:41.620148805Z" level=info msg="StartContainer for \"4a4d14b93214c215685bc9a76022f93b4eaec0b3e45477127a89e112da476b66\" returns successfully"
May 17 00:53:41.919918 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 17 00:53:42.283797 kubelet[2603]: I0517 00:53:42.283404 2603 setters.go:600] "Node became not ready" node="ci-3510.3.7-n-ce3994935d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:53:42Z","lastTransitionTime":"2025-05-17T00:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:53:42.544688 kubelet[2603]: I0517 00:53:42.544550 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zwsz6" podStartSLOduration=5.544533701 podStartE2EDuration="5.544533701s" podCreationTimestamp="2025-05-17 00:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:53:42.54412423 +0000 UTC m=+194.570524817" watchObservedRunningTime="2025-05-17 00:53:42.544533701 +0000 UTC m=+194.570934288"
May 17 00:53:43.238257 systemd[1]: run-containerd-runc-k8s.io-4a4d14b93214c215685bc9a76022f93b4eaec0b3e45477127a89e112da476b66-runc.lb5eQZ.mount: Deactivated successfully.
May 17 00:53:44.589023 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:53:44.587280 systemd-networkd[1758]: lxc_health: Link UP
May 17 00:53:44.587619 systemd-networkd[1758]: lxc_health: Gained carrier
May 17 00:53:45.367919 systemd[1]: run-containerd-runc-k8s.io-4a4d14b93214c215685bc9a76022f93b4eaec0b3e45477127a89e112da476b66-runc.Jel1jQ.mount: Deactivated successfully.
May 17 00:53:45.622711 systemd-networkd[1758]: lxc_health: Gained IPv6LL
May 17 00:53:47.544280 systemd[1]: run-containerd-runc-k8s.io-4a4d14b93214c215685bc9a76022f93b4eaec0b3e45477127a89e112da476b66-runc.fJKx6a.mount: Deactivated successfully.
May 17 00:53:49.679214 systemd[1]: run-containerd-runc-k8s.io-4a4d14b93214c215685bc9a76022f93b4eaec0b3e45477127a89e112da476b66-runc.l1qdny.mount: Deactivated successfully.
May 17 00:53:51.788466 systemd[1]: run-containerd-runc-k8s.io-4a4d14b93214c215685bc9a76022f93b4eaec0b3e45477127a89e112da476b66-runc.OGuLFX.mount: Deactivated successfully.
May 17 00:53:51.936842 sshd[4436]: pam_unix(sshd:session): session closed for user core
May 17 00:53:51.939421 systemd[1]: sshd@23-10.200.20.21:22-10.200.16.10:39484.service: Deactivated successfully.
May 17 00:53:51.940171 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:53:51.941028 systemd-logind[1545]: Session 26 logged out. Waiting for processes to exit.
May 17 00:53:51.942103 systemd-logind[1545]: Removed session 26.