Sep 13 01:32:55.010639 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 01:32:55.010657 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 01:32:55.010665 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 13 01:32:55.010672 kernel: printk: bootconsole [pl11] enabled
Sep 13 01:32:55.010677 kernel: efi: EFI v2.70 by EDK II
Sep 13 01:32:55.010683 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Sep 13 01:32:55.010689 kernel: random: crng init done
Sep 13 01:32:55.010694 kernel: ACPI: Early table checksum verification disabled
Sep 13 01:32:55.010700 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 13 01:32:55.010705 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010710 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010716 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 01:32:55.010723 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010728 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010735 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010741 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010747 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010753 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010759 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 13 01:32:55.010765 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:55.010771 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 13 01:32:55.010776 kernel: NUMA: Failed to initialise from firmware
Sep 13 01:32:55.010782 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:32:55.010788 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Sep 13 01:32:55.010793 kernel: Zone ranges:
Sep 13 01:32:55.010799 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 13 01:32:55.010804 kernel: DMA32 empty
Sep 13 01:32:55.010810 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:32:55.010817 kernel: Movable zone start for each node
Sep 13 01:32:55.010822 kernel: Early memory node ranges
Sep 13 01:32:55.010828 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 13 01:32:55.010834 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 13 01:32:55.010839 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 13 01:32:55.010845 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 13 01:32:55.010851 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 13 01:32:55.010856 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 13 01:32:55.010862 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:32:55.010867 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:32:55.010873 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 13 01:32:55.010879 kernel: psci: probing for conduit method from ACPI.
Sep 13 01:32:55.010888 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 01:32:55.010894 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 01:32:55.010900 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 13 01:32:55.010906 kernel: psci: SMC Calling Convention v1.4
Sep 13 01:32:55.010912 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Sep 13 01:32:55.010919 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Sep 13 01:32:55.010926 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 01:32:55.010932 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 01:32:55.010938 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 01:32:55.010944 kernel: Detected PIPT I-cache on CPU0
Sep 13 01:32:55.010951 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 01:32:55.010957 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 01:32:55.010963 kernel: CPU features: detected: Spectre-BHB
Sep 13 01:32:55.010969 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 01:32:55.010975 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 01:32:55.010981 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 01:32:55.010988 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 13 01:32:55.010994 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 01:32:55.011000 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 13 01:32:55.011006 kernel: Policy zone: Normal
Sep 13 01:32:55.011014 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:32:55.011020 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 01:32:55.011026 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 01:32:55.011032 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 01:32:55.011038 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 01:32:55.011045 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Sep 13 01:32:55.011051 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved)
Sep 13 01:32:55.011058 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 01:32:55.011064 kernel: trace event string verifier disabled
Sep 13 01:32:55.011070 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 01:32:55.011077 kernel: rcu: RCU event tracing is enabled.
Sep 13 01:32:55.011083 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 01:32:55.011089 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 01:32:55.011096 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 01:32:55.011102 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 01:32:55.011108 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 01:32:55.011114 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 01:32:55.011120 kernel: GICv3: 960 SPIs implemented
Sep 13 01:32:55.011127 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 01:32:55.011133 kernel: GICv3: Distributor has no Range Selector support
Sep 13 01:32:55.011139 kernel: Root IRQ handler: gic_handle_irq
Sep 13 01:32:55.011145 kernel: GICv3: 16 PPIs implemented
Sep 13 01:32:55.011151 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 13 01:32:55.011157 kernel: ITS: No ITS available, not enabling LPIs
Sep 13 01:32:55.011163 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:32:55.011169 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 01:32:55.011175 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 01:32:55.011182 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 01:32:55.011188 kernel: Console: colour dummy device 80x25
Sep 13 01:32:55.011196 kernel: printk: console [tty1] enabled
Sep 13 01:32:55.011202 kernel: ACPI: Core revision 20210730
Sep 13 01:32:55.011209 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 01:32:55.011215 kernel: pid_max: default: 32768 minimum: 301
Sep 13 01:32:55.011221 kernel: LSM: Security Framework initializing
Sep 13 01:32:55.011227 kernel: SELinux: Initializing.
Sep 13 01:32:55.011233 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:32:55.011240 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:32:55.011246 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 13 01:32:55.011254 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 13 01:32:55.021289 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 01:32:55.021303 kernel: Remapping and enabling EFI services.
Sep 13 01:32:55.021310 kernel: smp: Bringing up secondary CPUs ...
Sep 13 01:32:55.021317 kernel: Detected PIPT I-cache on CPU1
Sep 13 01:32:55.021324 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 13 01:32:55.021330 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:32:55.021337 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 01:32:55.021343 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 01:32:55.021350 kernel: SMP: Total of 2 processors activated.
Sep 13 01:32:55.021361 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 01:32:55.021367 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 13 01:32:55.021374 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 01:32:55.021380 kernel: CPU features: detected: CRC32 instructions
Sep 13 01:32:55.021386 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 01:32:55.021393 kernel: CPU features: detected: LSE atomic instructions
Sep 13 01:32:55.021399 kernel: CPU features: detected: Privileged Access Never
Sep 13 01:32:55.021405 kernel: CPU: All CPU(s) started at EL1
Sep 13 01:32:55.021412 kernel: alternatives: patching kernel code
Sep 13 01:32:55.021419 kernel: devtmpfs: initialized
Sep 13 01:32:55.021431 kernel: KASLR enabled
Sep 13 01:32:55.021438 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 01:32:55.021446 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 01:32:55.021452 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 01:32:55.021459 kernel: SMBIOS 3.1.0 present.
Sep 13 01:32:55.021466 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 01:32:55.021472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 01:32:55.021479 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 01:32:55.021487 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 01:32:55.021494 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 01:32:55.021500 kernel: audit: initializing netlink subsys (disabled)
Sep 13 01:32:55.021507 kernel: audit: type=2000 audit(0.086:1): state=initialized audit_enabled=0 res=1
Sep 13 01:32:55.021514 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 01:32:55.021520 kernel: cpuidle: using governor menu
Sep 13 01:32:55.021527 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 01:32:55.021535 kernel: ASID allocator initialised with 32768 entries
Sep 13 01:32:55.021541 kernel: ACPI: bus type PCI registered
Sep 13 01:32:55.021548 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 01:32:55.021555 kernel: Serial: AMBA PL011 UART driver
Sep 13 01:32:55.021561 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 01:32:55.021568 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 01:32:55.021575 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 01:32:55.021581 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 01:32:55.021588 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 01:32:55.021596 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 01:32:55.021603 kernel: ACPI: Added _OSI(Module Device)
Sep 13 01:32:55.021609 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 01:32:55.021616 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 01:32:55.021622 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 01:32:55.021629 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 01:32:55.021635 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 01:32:55.021642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 01:32:55.021649 kernel: ACPI: Interpreter enabled
Sep 13 01:32:55.021656 kernel: ACPI: Using GIC for interrupt routing
Sep 13 01:32:55.021663 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 01:32:55.021669 kernel: printk: console [ttyAMA0] enabled
Sep 13 01:32:55.021676 kernel: printk: bootconsole [pl11] disabled
Sep 13 01:32:55.021683 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 13 01:32:55.021689 kernel: iommu: Default domain type: Translated
Sep 13 01:32:55.021696 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 01:32:55.021703 kernel: vgaarb: loaded
Sep 13 01:32:55.021709 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 01:32:55.021716 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Sep 13 01:32:55.021724 kernel: PTP clock support registered
Sep 13 01:32:55.021730 kernel: Registered efivars operations
Sep 13 01:32:55.021737 kernel: No ACPI PMU IRQ for CPU0
Sep 13 01:32:55.021743 kernel: No ACPI PMU IRQ for CPU1
Sep 13 01:32:55.021750 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 01:32:55.021757 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 01:32:55.021764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 01:32:55.021770 kernel: pnp: PnP ACPI init
Sep 13 01:32:55.021777 kernel: pnp: PnP ACPI: found 0 devices
Sep 13 01:32:55.021785 kernel: NET: Registered PF_INET protocol family
Sep 13 01:32:55.021792 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 01:32:55.021798 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 01:32:55.021805 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 01:32:55.021812 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 01:32:55.021819 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 01:32:55.021825 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 01:32:55.021832 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:32:55.021840 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:32:55.021846 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 01:32:55.021853 kernel: PCI: CLS 0 bytes, default 64
Sep 13 01:32:55.021860 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 13 01:32:55.021866 kernel: kvm [1]: HYP mode not available
Sep 13 01:32:55.021873 kernel: Initialise system trusted keyrings
Sep 13 01:32:55.021879 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 01:32:55.021886 kernel: Key type asymmetric registered
Sep 13 01:32:55.021892 kernel: Asymmetric key parser 'x509' registered
Sep 13 01:32:55.021900 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 01:32:55.021906 kernel: io scheduler mq-deadline registered
Sep 13 01:32:55.021913 kernel: io scheduler kyber registered
Sep 13 01:32:55.021919 kernel: io scheduler bfq registered
Sep 13 01:32:55.021926 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 01:32:55.021932 kernel: thunder_xcv, ver 1.0
Sep 13 01:32:55.021939 kernel: thunder_bgx, ver 1.0
Sep 13 01:32:55.021946 kernel: nicpf, ver 1.0
Sep 13 01:32:55.021952 kernel: nicvf, ver 1.0
Sep 13 01:32:55.022078 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 01:32:55.022141 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T01:32:54 UTC (1757727174)
Sep 13 01:32:55.022151 kernel: efifb: probing for efifb
Sep 13 01:32:55.022158 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 01:32:55.022164 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 01:32:55.022171 kernel: efifb: scrolling: redraw
Sep 13 01:32:55.022178 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 01:32:55.022185 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:32:55.022193 kernel: fb0: EFI VGA frame buffer device
Sep 13 01:32:55.022199 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 13 01:32:55.022206 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 01:32:55.022213 kernel: NET: Registered PF_INET6 protocol family
Sep 13 01:32:55.022219 kernel: Segment Routing with IPv6
Sep 13 01:32:55.022226 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 01:32:55.022233 kernel: NET: Registered PF_PACKET protocol family
Sep 13 01:32:55.022239 kernel: Key type dns_resolver registered
Sep 13 01:32:55.022246 kernel: registered taskstats version 1
Sep 13 01:32:55.022252 kernel: Loading compiled-in X.509 certificates
Sep 13 01:32:55.022274 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 01:32:55.022282 kernel: Key type .fscrypt registered
Sep 13 01:32:55.022289 kernel: Key type fscrypt-provisioning registered
Sep 13 01:32:55.022296 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 01:32:55.022302 kernel: ima: Allocated hash algorithm: sha1
Sep 13 01:32:55.022309 kernel: ima: No architecture policies found
Sep 13 01:32:55.022316 kernel: clk: Disabling unused clocks
Sep 13 01:32:55.022322 kernel: Freeing unused kernel memory: 36416K
Sep 13 01:32:55.022331 kernel: Run /init as init process
Sep 13 01:32:55.022337 kernel: with arguments:
Sep 13 01:32:55.022344 kernel: /init
Sep 13 01:32:55.022350 kernel: with environment:
Sep 13 01:32:55.022357 kernel: HOME=/
Sep 13 01:32:55.022363 kernel: TERM=linux
Sep 13 01:32:55.022369 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 01:32:55.022378 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 01:32:55.022389 systemd[1]: Detected virtualization microsoft.
Sep 13 01:32:55.022396 systemd[1]: Detected architecture arm64.
Sep 13 01:32:55.022403 systemd[1]: Running in initrd.
Sep 13 01:32:55.022410 systemd[1]: No hostname configured, using default hostname.
Sep 13 01:32:55.022417 systemd[1]: Hostname set to <localhost>.
Sep 13 01:32:55.022424 systemd[1]: Initializing machine ID from random generator.
Sep 13 01:32:55.022431 systemd[1]: Queued start job for default target initrd.target.
Sep 13 01:32:55.022438 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 01:32:55.022446 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:32:55.022453 systemd[1]: Reached target paths.target.
Sep 13 01:32:55.022460 systemd[1]: Reached target slices.target.
Sep 13 01:32:55.022467 systemd[1]: Reached target swap.target.
Sep 13 01:32:55.022474 systemd[1]: Reached target timers.target.
Sep 13 01:32:55.022482 systemd[1]: Listening on iscsid.socket.
Sep 13 01:32:55.022489 systemd[1]: Listening on iscsiuio.socket.
Sep 13 01:32:55.022496 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 01:32:55.022504 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 01:32:55.022511 systemd[1]: Listening on systemd-journald.socket.
Sep 13 01:32:55.022518 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 01:32:55.022526 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 01:32:55.022533 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 01:32:55.022540 systemd[1]: Reached target sockets.target.
Sep 13 01:32:55.022547 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 01:32:55.022554 systemd[1]: Finished network-cleanup.service.
Sep 13 01:32:55.022561 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 01:32:55.022570 systemd[1]: Starting systemd-journald.service...
Sep 13 01:32:55.022577 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:32:55.022584 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:32:55.022591 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 01:32:55.022601 systemd-journald[276]: Journal started
Sep 13 01:32:55.022644 systemd-journald[276]: Runtime Journal (/run/log/journal/2b256319e7fa431bb798dd8431f72ba9) is 8.0M, max 78.5M, 70.5M free.
Sep 13 01:32:55.014272 systemd-modules-load[277]: Inserted module 'overlay'
Sep 13 01:32:55.056841 systemd[1]: Started systemd-journald.service.
Sep 13 01:32:55.056893 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 01:32:55.049947 systemd-resolved[278]: Positive Trust Anchors:
Sep 13 01:32:55.090187 kernel: audit: type=1130 audit(1757727175.061:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.090211 kernel: Bridge firewalling registered
Sep 13 01:32:55.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.049969 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:32:55.152491 kernel: audit: type=1130 audit(1757727175.084:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.152516 kernel: audit: type=1130 audit(1757727175.108:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.152535 kernel: SCSI subsystem initialized
Sep 13 01:32:55.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.049999 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:32:55.242337 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 01:32:55.242363 kernel: audit: type=1130 audit(1757727175.122:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.242374 kernel: device-mapper: uevent: version 1.0.3
Sep 13 01:32:55.242382 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 01:32:55.242391 kernel: audit: type=1130 audit(1757727175.147:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.052108 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 13 01:32:55.061612 systemd[1]: Started systemd-resolved.service.
Sep 13 01:32:55.085224 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:32:55.093511 systemd-modules-load[277]: Inserted module 'br_netfilter'
Sep 13 01:32:55.320386 kernel: audit: type=1130 audit(1757727175.270:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.320421 kernel: audit: type=1130 audit(1757727175.298:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.109197 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 01:32:55.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.122414 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 01:32:55.355433 kernel: audit: type=1130 audit(1757727175.325:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.355479 dracut-cmdline[295]: dracut-dracut-053
Sep 13 01:32:55.355479 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t
Sep 13 01:32:55.355479 dracut-cmdline[295]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:32:55.147924 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:32:55.221084 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 01:32:55.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.238625 systemd-modules-load[277]: Inserted module 'dm_multipath'
Sep 13 01:32:55.243325 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:32:55.260463 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:32:55.446151 kernel: audit: type=1130 audit(1757727175.408:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.271240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:32:55.299419 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 01:32:55.326755 systemd[1]: Starting dracut-cmdline.service...
Sep 13 01:32:55.463409 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 01:32:55.355220 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:32:55.401117 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:32:55.480280 kernel: iscsi: registered transport (tcp)
Sep 13 01:32:55.502092 kernel: iscsi: registered transport (qla4xxx)
Sep 13 01:32:55.502146 kernel: QLogic iSCSI HBA Driver
Sep 13 01:32:55.536987 systemd[1]: Finished dracut-cmdline.service.
Sep 13 01:32:55.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.542360 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 01:32:55.600288 kernel: raid6: neonx8 gen() 13757 MB/s
Sep 13 01:32:55.619272 kernel: raid6: neonx8 xor() 10810 MB/s
Sep 13 01:32:55.639274 kernel: raid6: neonx4 gen() 13526 MB/s
Sep 13 01:32:55.660270 kernel: raid6: neonx4 xor() 11175 MB/s
Sep 13 01:32:55.681271 kernel: raid6: neonx2 gen() 12947 MB/s
Sep 13 01:32:55.701270 kernel: raid6: neonx2 xor() 10278 MB/s
Sep 13 01:32:55.721274 kernel: raid6: neonx1 gen() 10472 MB/s
Sep 13 01:32:55.742273 kernel: raid6: neonx1 xor() 8774 MB/s
Sep 13 01:32:55.763269 kernel: raid6: int64x8 gen() 6273 MB/s
Sep 13 01:32:55.800268 kernel: raid6: int64x8 xor() 3542 MB/s
Sep 13 01:32:55.810286 kernel: raid6: int64x4 gen() 7212 MB/s
Sep 13 01:32:55.825275 kernel: raid6: int64x4 xor() 3844 MB/s
Sep 13 01:32:55.846270 kernel: raid6: int64x2 gen() 6155 MB/s
Sep 13 01:32:55.867271 kernel: raid6: int64x2 xor() 3321 MB/s
Sep 13 01:32:55.888270 kernel: raid6: int64x1 gen() 5044 MB/s
Sep 13 01:32:55.913864 kernel: raid6: int64x1 xor() 2647 MB/s
Sep 13 01:32:55.913883 kernel: raid6: using algorithm neonx8 gen() 13757 MB/s
Sep 13 01:32:55.913900 kernel: raid6: .... xor() 10810 MB/s, rmw enabled
Sep 13 01:32:55.918281 kernel: raid6: using neon recovery algorithm
Sep 13 01:32:55.935272 kernel: xor: measuring software checksum speed
Sep 13 01:32:55.943011 kernel: 8regs : 16139 MB/sec
Sep 13 01:32:55.943031 kernel: 32regs : 20702 MB/sec
Sep 13 01:32:55.946877 kernel: arm64_neon : 27813 MB/sec
Sep 13 01:32:55.946895 kernel: xor: using function: arm64_neon (27813 MB/sec)
Sep 13 01:32:56.008279 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 01:32:56.018441 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 01:32:56.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.026000 audit: BPF prog-id=7 op=LOAD
Sep 13 01:32:56.026000 audit: BPF prog-id=8 op=LOAD
Sep 13 01:32:56.027552 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:32:56.045593 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Sep 13 01:32:56.052377 systemd[1]: Started systemd-udevd.service.
Sep 13 01:32:56.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.062427 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 01:32:56.076969 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Sep 13 01:32:56.102819 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 01:32:56.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.108182 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:32:56.147507 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:32:56.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:56.208282 kernel: hv_vmbus: Vmbus version:5.3
Sep 13 01:32:56.225290 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 01:32:56.230282 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 01:32:56.251958 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 13 01:32:56.252015 kernel: scsi host1: storvsc_host_t
Sep 13 01:32:56.252061 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 01:32:56.252071 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 01:32:56.259613 kernel: scsi host0: storvsc_host_t
Sep 13 01:32:56.267671 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 13 01:32:56.277253 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 01:32:56.284077 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 01:32:56.292937 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 13 01:32:56.318293 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 01:32:56.335043 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 01:32:56.335058 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 01:32:56.347836 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 01:32:56.347945 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 01:32:56.348023 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 01:32:56.348102 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 01:32:56.348179 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 01:32:56.348288 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:56.348301 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 01:32:56.385283 kernel: hv_netvsc 002248c1-6dd2-0022-48c1-6dd2002248c1 eth0: VF slot 1 added
Sep 13 01:32:56.393289 kernel: hv_vmbus: registering driver hv_pci
Sep 13 01:32:56.403323 kernel: hv_pci f7aee5b5-ad10-4b95-9b82-9ea5dd86ab5b: PCI VMBus probing: Using version 0x10004
Sep 13 01:32:56.690249 kernel: hv_pci f7aee5b5-ad10-4b95-9b82-9ea5dd86ab5b: PCI host bridge to bus ad10:00
Sep 13 01:32:56.690407 kernel: pci_bus ad10:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 13 01:32:56.690503 kernel: pci_bus ad10:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 13 01:32:56.690576 kernel: pci ad10:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 13 01:32:56.690675 kernel: pci ad10:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 13 01:32:56.690752 kernel: pci ad10:00:02.0: enabling Extended Tags
Sep 13 01:32:56.690829 kernel: pci ad10:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ad10:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 13 01:32:56.690907 kernel: pci_bus ad10:00: busn_res: [bus 00-ff] end is updated to 00
Sep 13 01:32:56.690979 kernel: pci ad10:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 13 01:32:56.728727 kernel: mlx5_core ad10:00:02.0: enabling device (0000 -> 0002)
Sep 13 01:32:57.050085 kernel: mlx5_core ad10:00:02.0: firmware version: 16.31.2424
Sep 13 01:32:57.050200 kernel: mlx5_core ad10:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Sep 13 01:32:57.050298 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (538)
Sep 13 01:32:57.050308 kernel: hv_netvsc 002248c1-6dd2-0022-48c1-6dd2002248c1 eth0: VF registering: eth1
Sep 13 01:32:57.050399 kernel: mlx5_core ad10:00:02.0 eth1: joined to eth0
Sep 13 01:32:56.990191 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 13 01:32:57.019938 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 01:32:57.069277 kernel: mlx5_core ad10:00:02.0 enP44304s1: renamed from eth1
Sep 13 01:32:57.241790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 13 01:32:57.250806 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 13 01:32:57.262372 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 13 01:32:57.277057 systemd[1]: Starting disk-uuid.service...
Sep 13 01:32:57.299295 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:57.309286 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:57.320424 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:58.320956 disk-uuid[605]: The operation has completed successfully.
Sep 13 01:32:58.326201 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 01:32:58.388024 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 01:32:58.390400 systemd[1]: Finished disk-uuid.service.
Sep 13 01:32:58.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:58.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:58.402117 systemd[1]: Starting verity-setup.service...
Sep 13 01:32:58.443292 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 01:32:58.988296 systemd[1]: Found device dev-mapper-usr.device.
Sep 13 01:32:58.993661 systemd[1]: Mounting sysusr-usr.mount...
Sep 13 01:32:59.002929 systemd[1]: Finished verity-setup.service.
Sep 13 01:32:59.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.017494 kernel: kauditd_printk_skb: 9 callbacks suppressed
Sep 13 01:32:59.017536 kernel: audit: type=1130 audit(1757727179.008:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.085295 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 13 01:32:59.085062 systemd[1]: Mounted sysusr-usr.mount.
Sep 13 01:32:59.089327 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 13 01:32:59.090092 systemd[1]: Starting ignition-setup.service...
Sep 13 01:32:59.106010 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 13 01:32:59.141099 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 01:32:59.141158 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:32:59.145930 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:32:59.198120 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 13 01:32:59.234362 kernel: audit: type=1130 audit(1757727179.202:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.234387 kernel: audit: type=1334 audit(1757727179.206:22): prog-id=9 op=LOAD
Sep 13 01:32:59.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.206000 audit: BPF prog-id=9 op=LOAD
Sep 13 01:32:59.207894 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:32:59.256998 systemd-networkd[869]: lo: Link UP
Sep 13 01:32:59.257007 systemd-networkd[869]: lo: Gained carrier
Sep 13 01:32:59.289676 kernel: audit: type=1130 audit(1757727179.265:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.258630 systemd-networkd[869]: Enumeration completed
Sep 13 01:32:59.258739 systemd[1]: Started systemd-networkd.service.
Sep 13 01:32:59.266061 systemd[1]: Reached target network.target.
Sep 13 01:32:59.270567 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:32:59.340036 kernel: audit: type=1130 audit(1757727179.310:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.290787 systemd[1]: Starting iscsiuio.service...
Sep 13 01:32:59.344124 iscsid[876]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 01:32:59.344124 iscsid[876]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Sep 13 01:32:59.344124 iscsid[876]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Sep 13 01:32:59.344124 iscsid[876]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 13 01:32:59.344124 iscsid[876]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 13 01:32:59.344124 iscsid[876]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 13 01:32:59.344124 iscsid[876]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 13 01:32:59.473755 kernel: audit: type=1130 audit(1757727179.339:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.473783 kernel: audit: type=1130 audit(1757727179.395:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.303071 systemd[1]: Started iscsiuio.service.
Sep 13 01:32:59.311878 systemd[1]: Starting iscsid.service...
Sep 13 01:32:59.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.336315 systemd[1]: Started iscsid.service.
Sep 13 01:32:59.509956 kernel: audit: type=1130 audit(1757727179.486:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.341134 systemd[1]: Starting dracut-initqueue.service...
Sep 13 01:32:59.369404 systemd[1]: Finished dracut-initqueue.service.
Sep 13 01:32:59.396471 systemd[1]: Reached target remote-fs-pre.target.
Sep 13 01:32:59.433358 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 01:32:59.441197 systemd[1]: Reached target remote-fs.target.
Sep 13 01:32:59.452911 systemd[1]: Starting dracut-pre-mount.service...
Sep 13 01:32:59.478919 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 01:32:59.479425 systemd[1]: Finished dracut-pre-mount.service.
Sep 13 01:32:59.581283 kernel: mlx5_core ad10:00:02.0 enP44304s1: Link up
Sep 13 01:32:59.587278 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 01:32:59.666289 kernel: hv_netvsc 002248c1-6dd2-0022-48c1-6dd2002248c1 eth0: Data path switched to VF: enP44304s1
Sep 13 01:32:59.672277 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 01:32:59.672311 systemd-networkd[869]: enP44304s1: Link UP
Sep 13 01:32:59.672403 systemd-networkd[869]: eth0: Link UP
Sep 13 01:32:59.672527 systemd-networkd[869]: eth0: Gained carrier
Sep 13 01:32:59.685739 systemd-networkd[869]: enP44304s1: Gained carrier
Sep 13 01:32:59.697352 systemd-networkd[869]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 01:32:59.864252 systemd[1]: Finished ignition-setup.service.
Sep 13 01:32:59.890382 kernel: audit: type=1130 audit(1757727179.868:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:59.869732 systemd[1]: Starting ignition-fetch-offline.service...
Sep 13 01:33:01.234370 systemd-networkd[869]: eth0: Gained IPv6LL
Sep 13 01:33:03.658690 ignition[896]: Ignition 2.14.0
Sep 13 01:33:03.658701 ignition[896]: Stage: fetch-offline
Sep 13 01:33:03.658751 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:03.658772 ignition[896]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:03.744927 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:03.745083 ignition[896]: parsed url from cmdline: ""
Sep 13 01:33:03.745087 ignition[896]: no config URL provided
Sep 13 01:33:03.745093 ignition[896]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 01:33:03.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:03.799014 systemd[1]: Finished ignition-fetch-offline.service.
Sep 13 01:33:03.840756 kernel: audit: type=1130 audit(1757727183.809:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:03.745102 ignition[896]: no config at "/usr/lib/ignition/user.ign"
Sep 13 01:33:03.811162 systemd[1]: Starting ignition-fetch.service...
Sep 13 01:33:03.745107 ignition[896]: failed to fetch config: resource requires networking
Sep 13 01:33:03.745394 ignition[896]: Ignition finished successfully
Sep 13 01:33:03.839981 ignition[902]: Ignition 2.14.0
Sep 13 01:33:03.839988 ignition[902]: Stage: fetch
Sep 13 01:33:03.840088 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:03.840108 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:03.847893 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:03.848024 ignition[902]: parsed url from cmdline: ""
Sep 13 01:33:03.848027 ignition[902]: no config URL provided
Sep 13 01:33:03.848032 ignition[902]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 01:33:03.848040 ignition[902]: no config at "/usr/lib/ignition/user.ign"
Sep 13 01:33:03.848070 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 13 01:33:03.924954 ignition[902]: GET result: OK
Sep 13 01:33:03.928200 unknown[902]: fetched base config from "system"
Sep 13 01:33:03.925023 ignition[902]: config has been read from IMDS userdata
Sep 13 01:33:03.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:03.928208 unknown[902]: fetched base config from "system"
Sep 13 01:33:03.925062 ignition[902]: parsing config with SHA512: 44ae2389a417b5dfb7029aef2978b00b9cd7c4d0b14e5faa54f33ebe87d095983092f41393d114ada46c13ed1b176f3bccfeb76976e310bd63028db3b206b423
Sep 13 01:33:03.928214 unknown[902]: fetched user config from "azure"
Sep 13 01:33:03.928728 ignition[902]: fetch: fetch complete
Sep 13 01:33:03.932523 systemd[1]: Finished ignition-fetch.service.
Sep 13 01:33:03.928734 ignition[902]: fetch: fetch passed
Sep 13 01:33:03.941927 systemd[1]: Starting ignition-kargs.service...
Sep 13 01:33:03.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:03.928777 ignition[902]: Ignition finished successfully
Sep 13 01:33:03.970121 systemd[1]: Finished ignition-kargs.service.
Sep 13 01:33:03.961196 ignition[908]: Ignition 2.14.0
Sep 13 01:33:03.975408 systemd[1]: Starting ignition-disks.service...
Sep 13 01:33:03.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:03.961203 ignition[908]: Stage: kargs
Sep 13 01:33:03.995637 systemd[1]: Finished ignition-disks.service.
Sep 13 01:33:03.961326 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:04.000312 systemd[1]: Reached target initrd-root-device.target.
Sep 13 01:33:03.961348 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:04.009163 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:33:03.964990 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:04.016462 systemd[1]: Reached target local-fs.target.
Sep 13 01:33:03.967439 ignition[908]: kargs: kargs passed
Sep 13 01:33:04.024942 systemd[1]: Reached target sysinit.target.
Sep 13 01:33:03.967491 ignition[908]: Ignition finished successfully
Sep 13 01:33:04.035059 systemd[1]: Reached target basic.target.
Sep 13 01:33:03.986346 ignition[914]: Ignition 2.14.0
Sep 13 01:33:04.044561 systemd[1]: Starting systemd-fsck-root.service...
Sep 13 01:33:03.986352 ignition[914]: Stage: disks
Sep 13 01:33:03.986464 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:03.986481 ignition[914]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:03.990149 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:03.993395 ignition[914]: disks: disks passed
Sep 13 01:33:03.993452 ignition[914]: Ignition finished successfully
Sep 13 01:33:04.116859 systemd-fsck[922]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks
Sep 13 01:33:04.124055 systemd[1]: Finished systemd-fsck-root.service.
Sep 13 01:33:04.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:04.138421 kernel: kauditd_printk_skb: 3 callbacks suppressed
Sep 13 01:33:04.138454 kernel: audit: type=1130 audit(1757727184.128:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:04.138392 systemd[1]: Mounting sysroot.mount...
Sep 13 01:33:04.187302 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 13 01:33:04.187754 systemd[1]: Mounted sysroot.mount.
Sep 13 01:33:04.191471 systemd[1]: Reached target initrd-root-fs.target.
Sep 13 01:33:04.237858 systemd[1]: Mounting sysroot-usr.mount...
Sep 13 01:33:04.242637 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 13 01:33:04.250519 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 01:33:04.250553 systemd[1]: Reached target ignition-diskful.target.
Sep 13 01:33:04.256428 systemd[1]: Mounted sysroot-usr.mount.
Sep 13 01:33:04.333486 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 13 01:33:04.338477 systemd[1]: Starting initrd-setup-root.service...
Sep 13 01:33:04.364851 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 01:33:04.371917 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933)
Sep 13 01:33:04.371939 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 01:33:04.382940 kernel: BTRFS info (device sda6): using free space tree
Sep 13 01:33:04.382957 kernel: BTRFS info (device sda6): has skinny extents
Sep 13 01:33:04.394819 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 13 01:33:04.408654 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory
Sep 13 01:33:04.431723 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 01:33:04.458987 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 01:33:05.203060 systemd[1]: Finished initrd-setup-root.service.
Sep 13 01:33:05.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:05.227563 systemd[1]: Starting ignition-mount.service...
Sep 13 01:33:05.239436 kernel: audit: type=1130 audit(1757727185.207:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:05.233798 systemd[1]: Starting sysroot-boot.service...
Sep 13 01:33:05.244197 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 13 01:33:05.244334 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 13 01:33:05.269470 systemd[1]: Finished sysroot-boot.service.
Sep 13 01:33:05.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:05.294283 kernel: audit: type=1130 audit(1757727185.273:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:05.323382 ignition[1002]: INFO : Ignition 2.14.0
Sep 13 01:33:05.323382 ignition[1002]: INFO : Stage: mount
Sep 13 01:33:05.333053 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:33:05.333053 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:33:05.333053 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:33:05.333053 ignition[1002]: INFO : mount: mount passed
Sep 13 01:33:05.333053 ignition[1002]: INFO : Ignition finished successfully
Sep 13 01:33:05.389680 kernel: audit: type=1130 audit(1757727185.344:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:05.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:05.333949 systemd[1]: Finished ignition-mount.service.
Sep 13 01:33:05.994577 coreos-metadata[932]: Sep 13 01:33:05.994 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 13 01:33:06.004507 coreos-metadata[932]: Sep 13 01:33:06.004 INFO Fetch successful
Sep 13 01:33:06.038679 coreos-metadata[932]: Sep 13 01:33:06.038 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 13 01:33:06.061486 coreos-metadata[932]: Sep 13 01:33:06.061 INFO Fetch successful
Sep 13 01:33:06.080360 coreos-metadata[932]: Sep 13 01:33:06.080 INFO wrote hostname ci-3510.3.8-n-49eff79a60 to /sysroot/etc/hostname
Sep 13 01:33:06.089373 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 13 01:33:06.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:06.095330 systemd[1]: Starting ignition-files.service...
Sep 13 01:33:06.122432 kernel: audit: type=1130 audit(1757727186.094:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:06.121367 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 01:33:06.144277 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1011) Sep 13 01:33:06.157207 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 01:33:06.157245 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:33:06.157275 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:33:06.170835 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 01:33:06.184518 ignition[1030]: INFO : Ignition 2.14.0 Sep 13 01:33:06.184518 ignition[1030]: INFO : Stage: files Sep 13 01:33:06.195409 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:06.195409 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:06.195409 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:06.195409 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Sep 13 01:33:06.228415 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 01:33:06.228415 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 01:33:06.317181 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 01:33:06.325037 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 01:33:06.347895 unknown[1030]: wrote ssh authorized keys file for user: core Sep 13 01:33:06.353628 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 01:33:06.362419 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 01:33:06.373971 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 13 01:33:06.708387 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 01:33:06.798455 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 01:33:06.822038 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:33:06.832039 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 13 01:33:06.875615 kernel: mlx5_core ad10:00:02.0: poll_health:739:(pid 0): device's health compromised - reached miss count Sep 13 01:33:06.926279 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 01:33:06.999893 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:33:07.010099 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] 
writing file "/sysroot/home/core/install.sh" Sep 13 01:33:07.010099 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 01:33:07.010099 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:33:07.010099 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:33:07.010099 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:33:07.010099 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:33:07.010099 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:33:07.010099 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3909015270" Sep 13 01:33:07.088963 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3909015270": device or resource busy Sep 13 01:33:07.088963 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3909015270", trying btrfs: device or resource busy Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3909015270" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3909015270" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3909015270" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3909015270" Sep 13 01:33:07.088963 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file 
"/sysroot/etc/systemd/system/waagent.service" Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem982245381" Sep 13 01:33:07.250053 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem982245381": device or resource busy Sep 13 01:33:07.250053 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem982245381", trying btrfs: device or resource busy Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem982245381" Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem982245381" Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem982245381" Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem982245381" Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 01:33:07.250053 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 13 01:33:07.090121 systemd[1]: mnt-oem3909015270.mount: Deactivated successfully. Sep 13 01:33:07.593415 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Sep 13 01:33:07.819626 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 01:33:07.819626 ignition[1030]: INFO : files: op(14): [started] processing unit "waagent.service" Sep 13 01:33:07.819626 ignition[1030]: INFO : files: op(14): [finished] processing unit "waagent.service" Sep 13 01:33:07.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(15): [started] processing unit "nvidia.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(15): [finished] processing unit "nvidia.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 01:33:07.864341 ignition[1030]: INFO : files: files passed Sep 13 01:33:07.864341 ignition[1030]: INFO : Ignition finished successfully Sep 13 01:33:08.109384 kernel: audit: type=1130 audit(1757727187.843:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.109409 kernel: audit: type=1130 audit(1757727187.913:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.109419 kernel: audit: type=1131 audit(1757727187.913:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.109429 kernel: audit: type=1130 audit(1757727187.963:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.109438 kernel: audit: type=1130 audit(1757727188.040:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:07.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:07.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:07.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:07.833031 systemd[1]: Finished ignition-files.service. Sep 13 01:33:07.867881 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 01:33:07.885560 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 01:33:08.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.142106 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 01:33:07.892388 systemd[1]: Starting ignition-quench.service... Sep 13 01:33:07.909594 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 01:33:07.909699 systemd[1]: Finished ignition-quench.service. Sep 13 01:33:07.914785 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 01:33:07.963539 systemd[1]: Reached target ignition-complete.target. Sep 13 01:33:08.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:07.999734 systemd[1]: Starting initrd-parse-etc.service... Sep 13 01:33:08.029580 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 01:33:08.029677 systemd[1]: Finished initrd-parse-etc.service. Sep 13 01:33:08.066903 systemd[1]: Reached target initrd-fs.target. Sep 13 01:33:08.082057 systemd[1]: Reached target initrd.target. Sep 13 01:33:08.098711 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 01:33:08.103027 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 01:33:08.126162 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 01:33:08.138324 systemd[1]: Starting initrd-cleanup.service... Sep 13 01:33:08.159429 systemd[1]: Stopped target nss-lookup.target. Sep 13 01:33:08.165184 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 01:33:08.175706 systemd[1]: Stopped target timers.target. Sep 13 01:33:08.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.184180 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 01:33:08.184248 systemd[1]: Stopped dracut-pre-pivot.service. 
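
The files stage logged above (ops 3 through 1b) is driven by the merged Ignition config: SSH keys for "core", downloaded archives, a kubernetes sysext link, and three units preset to enabled. A hedged reconstruction of the config shape, expressed as a Python dict in roughly Ignition spec v2.3 terms (the SSH key and the data: URL contents are placeholders, not the instance's real user data):

    #!/usr/bin/env python3
    import json

    config = {
        "ignition": {"version": "2.3.0"},
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core"]},
        ]},
        "storage": {
            "files": [
                {"filesystem": "root",
                 "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                 "contents": {"source":
                     "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"}},
                {"filesystem": "root", "path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,GROUP%3Dstable%0A"}},
            ],
            "links": [
                {"filesystem": "root", "path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "waagent.service", "enabled": True},
            {"name": "nvidia.service", "enabled": True},
            {"name": "prepare-helm.service", "enabled": True},
        ]},
    }
    print(json.dumps(config, indent=2))
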
Sep 13 01:33:08.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.194280 systemd[1]: Stopped target initrd.target. Sep 13 01:33:08.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.203156 systemd[1]: Stopped target basic.target. Sep 13 01:33:08.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.211567 systemd[1]: Stopped target ignition-complete.target. Sep 13 01:33:08.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.222447 systemd[1]: Stopped target ignition-diskful.target. Sep 13 01:33:08.373728 iscsid[876]: iscsid shutting down. Sep 13 01:33:08.232115 systemd[1]: Stopped target initrd-root-device.target. Sep 13 01:33:08.391507 ignition[1068]: INFO : Ignition 2.14.0 Sep 13 01:33:08.391507 ignition[1068]: INFO : Stage: umount Sep 13 01:33:08.391507 ignition[1068]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:33:08.391507 ignition[1068]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:33:08.391507 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:33:08.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.241544 systemd[1]: Stopped target remote-fs.target. Sep 13 01:33:08.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.451792 ignition[1068]: INFO : umount: umount passed Sep 13 01:33:08.451792 ignition[1068]: INFO : Ignition finished successfully Sep 13 01:33:08.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:08.249970 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 01:33:08.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.261840 systemd[1]: Stopped target sysinit.target. Sep 13 01:33:08.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.270764 systemd[1]: Stopped target local-fs.target. Sep 13 01:33:08.279635 systemd[1]: Stopped target local-fs-pre.target. Sep 13 01:33:08.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.288541 systemd[1]: Stopped target swap.target. Sep 13 01:33:08.298359 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 01:33:08.298430 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 01:33:08.307545 systemd[1]: Stopped target cryptsetup.target. Sep 13 01:33:08.315967 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 01:33:08.316020 systemd[1]: Stopped dracut-initqueue.service. Sep 13 01:33:08.325809 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 01:33:08.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.325849 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 01:33:08.335558 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 01:33:08.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.335597 systemd[1]: Stopped ignition-files.service. Sep 13 01:33:08.344106 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 01:33:08.344155 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 13 01:33:08.354085 systemd[1]: Stopping ignition-mount.service... Sep 13 01:33:08.363033 systemd[1]: Stopping iscsid.service... Sep 13 01:33:08.372822 systemd[1]: Stopping sysroot-boot.service... Sep 13 01:33:08.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.383760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 01:33:08.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.383843 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 01:33:08.643000 audit: BPF prog-id=6 op=UNLOAD Sep 13 01:33:08.396681 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 01:33:08.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.396751 systemd[1]: Stopped dracut-pre-trigger.service. 
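
Everything from here to the root switch is teardown, and each stop is logged twice: once as a plain "Stopped <unit>." message and once as an audit SERVICE_STOP record. A small illustrative parser (not part of Flatcar) that recovers the stop order from a saved copy of this console log:

    #!/usr/bin/env python3
    import re
    import sys

    # Matches e.g. 'systemd[1]: Stopped dracut-pre-pivot.service.' and
    # 'systemd[1]: Stopped target sysinit.target.'
    STOP = re.compile(r"systemd\[1\]: Stopped (?:target )?([A-Za-z0-9@._\\-]+)\.")

    def stop_order(path: str) -> list[str]:
        seen: set[str] = set()
        order: list[str] = []
        with open(path, errors="replace") as fh:
            for line in fh:
                for unit in STOP.findall(line):
                    if unit not in seen:
                        seen.add(unit)
                        order.append(unit)
        return order

    for unit in stop_order(sys.argv[1]):
        print(unit)
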
Sep 13 01:33:08.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.406665 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 01:33:08.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.406780 systemd[1]: Stopped iscsid.service. Sep 13 01:33:08.426560 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 01:33:08.426989 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 01:33:08.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.427075 systemd[1]: Finished initrd-cleanup.service. Sep 13 01:33:08.447937 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 01:33:08.448017 systemd[1]: Stopped ignition-mount.service. Sep 13 01:33:08.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.456665 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 01:33:08.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.456722 systemd[1]: Stopped ignition-disks.service. Sep 13 01:33:08.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.464898 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 01:33:08.464943 systemd[1]: Stopped ignition-kargs.service. Sep 13 01:33:08.475562 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 01:33:08.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.475599 systemd[1]: Stopped ignition-fetch.service. Sep 13 01:33:08.484220 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 01:33:08.808324 kernel: hv_netvsc 002248c1-6dd2-0022-48c1-6dd2002248c1 eth0: Data path switched from VF: enP44304s1 Sep 13 01:33:08.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.484272 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 01:33:08.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.496994 systemd[1]: Stopped target paths.target. 
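
The hv_netvsc message above is Azure accelerated networking falling back from the SR-IOV VF (enP44304s1) to the synthetic path as the initrd releases the NIC; the DHCPv6 lease loss just below is part of the same teardown. When debugging this stretch, a snapshot of interface state taken before teardown helps correlate events; a sketch using iproute2's JSON output (the helper itself is hypothetical, not shipped by Flatcar):

    #!/usr/bin/env python3
    import json
    import subprocess

    # 'ip -json addr' emits one object per link with its addresses.
    out = subprocess.run(["ip", "-json", "addr"],
                         capture_output=True, text=True, check=True).stdout
    for link in json.loads(out):
        addrs = [a.get("local") for a in link.get("addr_info", [])]
        print(f'{link["ifname"]}: {link["operstate"]} {addrs}')
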
Sep 13 01:33:08.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.505405 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 01:33:08.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.514285 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 01:33:08.520228 systemd[1]: Stopped target slices.target. Sep 13 01:33:08.528280 systemd[1]: Stopped target sockets.target. Sep 13 01:33:08.537555 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 01:33:08.537608 systemd[1]: Closed iscsid.socket. Sep 13 01:33:08.545861 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 01:33:08.545906 systemd[1]: Stopped ignition-setup.service. Sep 13 01:33:08.555023 systemd[1]: Stopping iscsiuio.service... Sep 13 01:33:08.565933 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 01:33:08.566034 systemd[1]: Stopped iscsiuio.service. Sep 13 01:33:08.574533 systemd[1]: Stopped target network.target. Sep 13 01:33:08.584239 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 01:33:08.584335 systemd[1]: Closed iscsiuio.socket. Sep 13 01:33:08.592013 systemd[1]: Stopping systemd-networkd.service... Sep 13 01:33:08.602365 systemd[1]: Stopping systemd-resolved.service... Sep 13 01:33:08.614866 systemd-networkd[869]: eth0: DHCPv6 lease lost Sep 13 01:33:08.898000 audit: BPF prog-id=9 op=UNLOAD Sep 13 01:33:08.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:08.616563 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 01:33:08.616668 systemd[1]: Stopped systemd-networkd.service. Sep 13 01:33:08.624799 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 01:33:08.624890 systemd[1]: Stopped systemd-resolved.service. Sep 13 01:33:08.634186 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 01:33:08.634221 systemd[1]: Closed systemd-networkd.socket. Sep 13 01:33:08.644343 systemd[1]: Stopping network-cleanup.service... Sep 13 01:33:08.651830 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 01:33:08.651889 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 01:33:08.656993 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:33:08.972554 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Sep 13 01:33:08.657040 systemd[1]: Stopped systemd-sysctl.service. Sep 13 01:33:08.670970 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 01:33:08.671015 systemd[1]: Stopped systemd-modules-load.service. Sep 13 01:33:08.676511 systemd[1]: Stopping systemd-udevd.service... Sep 13 01:33:08.685815 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 01:33:08.694855 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 13 01:33:08.695000 systemd[1]: Stopped systemd-udevd.service. Sep 13 01:33:08.699595 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 01:33:08.699638 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 01:33:08.708975 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 01:33:08.709017 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 01:33:08.717545 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 01:33:08.717597 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 01:33:08.727816 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 01:33:08.727859 systemd[1]: Stopped dracut-cmdline.service. Sep 13 01:33:08.736665 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 01:33:08.736710 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 01:33:08.750270 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 01:33:08.761383 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 01:33:08.761467 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 01:33:08.779375 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 01:33:08.779467 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 01:33:08.794753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 01:33:08.794822 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 01:33:08.806096 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 01:33:08.806580 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 01:33:08.806676 systemd[1]: Stopped sysroot-boot.service. Sep 13 01:33:08.811026 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 01:33:08.811103 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 01:33:08.820711 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 01:33:08.820766 systemd[1]: Stopped initrd-setup-root.service. Sep 13 01:33:08.895070 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 01:33:08.895164 systemd[1]: Stopped network-cleanup.service. Sep 13 01:33:08.903362 systemd[1]: Reached target initrd-switch-root.target. Sep 13 01:33:08.913026 systemd[1]: Starting initrd-switch-root.service... Sep 13 01:33:08.934082 systemd[1]: Switching root. Sep 13 01:33:08.973479 systemd-journald[276]: Journal stopped Sep 13 01:33:25.618900 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 01:33:25.618921 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 01:33:25.618931 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 01:33:25.618941 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 01:33:25.618949 kernel: SELinux: policy capability open_perms=1 Sep 13 01:33:25.618957 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 01:33:25.618966 kernel: SELinux: policy capability always_check_network=0 Sep 13 01:33:25.618974 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 01:33:25.618982 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 01:33:25.618990 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 01:33:25.618998 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 01:33:25.619007 kernel: kauditd_printk_skb: 39 callbacks suppressed Sep 13 01:33:25.619016 kernel: audit: type=1403 audit(1757727191.697:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 01:33:25.619026 systemd[1]: Successfully loaded SELinux policy in 321.316ms. Sep 13 01:33:25.619037 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.696ms. Sep 13 01:33:25.619050 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 01:33:25.619060 systemd[1]: Detected virtualization microsoft. Sep 13 01:33:25.619069 systemd[1]: Detected architecture arm64. Sep 13 01:33:25.619078 systemd[1]: Detected first boot. Sep 13 01:33:25.619087 systemd[1]: Hostname set to <ci-3510.3.8-n-49eff79a60>. Sep 13 01:33:25.619097 systemd[1]: Initializing machine ID from random generator. Sep 13 01:33:25.619106 kernel: audit: type=1400 audit(1757727192.823:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:33:25.619117 kernel: audit: type=1400 audit(1757727192.823:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:33:25.619125 kernel: audit: type=1334 audit(1757727192.840:85): prog-id=10 op=LOAD Sep 13 01:33:25.619134 kernel: audit: type=1334 audit(1757727192.840:86): prog-id=10 op=UNLOAD Sep 13 01:33:25.619143 kernel: audit: type=1334 audit(1757727192.857:87): prog-id=11 op=LOAD Sep 13 01:33:25.619152 kernel: audit: type=1334 audit(1757727192.857:88): prog-id=11 op=UNLOAD Sep 13 01:33:25.619160 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
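
The detection lines above ("Detected virtualization microsoft", "Detected architecture arm64", machine ID initialized from the RNG) can be reproduced after boot with standard tooling; systemd-detect-virt(1) is the same probe systemd itself uses:

    #!/usr/bin/env python3
    import platform
    import subprocess
    from pathlib import Path

    virt = subprocess.run(["systemd-detect-virt"], capture_output=True,
                          text=True).stdout.strip()     # "microsoft" on Azure
    machine_id = Path("/etc/machine-id").read_text().strip()
    print(f"virt={virt} arch={platform.machine()} machine-id={machine_id}")
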
Sep 13 01:33:25.619170 kernel: audit: type=1400 audit(1757727194.437:89): avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 01:33:25.619181 kernel: audit: type=1300 audit(1757727194.437:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022804 a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:25.619190 kernel: audit: type=1327 audit(1757727194.437:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:33:25.619199 systemd[1]: Populated /etc with preset unit settings. Sep 13 01:33:25.619208 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:33:25.619218 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:33:25.619228 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:33:25.619238 kernel: kauditd_printk_skb: 6 callbacks suppressed Sep 13 01:33:25.619247 kernel: audit: type=1334 audit(1757727204.877:91): prog-id=12 op=LOAD Sep 13 01:33:25.619255 kernel: audit: type=1334 audit(1757727204.877:92): prog-id=3 op=UNLOAD Sep 13 01:33:25.619275 kernel: audit: type=1334 audit(1757727204.877:93): prog-id=13 op=LOAD Sep 13 01:33:25.619285 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 01:33:25.619296 kernel: audit: type=1334 audit(1757727204.877:94): prog-id=14 op=LOAD Sep 13 01:33:25.619305 systemd[1]: Stopped initrd-switch-root.service. Sep 13 01:33:25.619316 kernel: audit: type=1334 audit(1757727204.877:95): prog-id=4 op=UNLOAD Sep 13 01:33:25.619326 kernel: audit: type=1334 audit(1757727204.877:96): prog-id=5 op=UNLOAD Sep 13 01:33:25.619335 kernel: audit: type=1334 audit(1757727204.883:97): prog-id=15 op=LOAD Sep 13 01:33:25.619344 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 01:33:25.619353 kernel: audit: type=1334 audit(1757727204.883:98): prog-id=12 op=UNLOAD Sep 13 01:33:25.619362 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 01:33:25.619371 kernel: audit: type=1334 audit(1757727204.888:99): prog-id=16 op=LOAD Sep 13 01:33:25.619380 kernel: audit: type=1334 audit(1757727204.894:100): prog-id=17 op=LOAD Sep 13 01:33:25.619389 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 01:33:25.619399 systemd[1]: Created slice system-getty.slice. Sep 13 01:33:25.619409 systemd[1]: Created slice system-modprobe.slice. Sep 13 01:33:25.619418 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 01:33:25.619428 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
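
The warnings above name their own fixes: CPUShares= should become CPUWeight=, MemoryLimit= should become MemoryMax=, and docker.socket should listen on /run/docker.sock rather than /var/run/docker.sock. A hypothetical cleanup helper that applies exactly those renames to a unit file (note that CPUShares and CPUWeight use different scales, default 1024 vs 100, so the value itself may need converting; this sketch only renames the directive):

    #!/usr/bin/env python3
    import re
    import sys

    REPLACEMENTS = [
        (re.compile(r"^CPUShares=", re.M), "CPUWeight="),
        (re.compile(r"^MemoryLimit=", re.M), "MemoryMax="),
        (re.compile(r"/var/run/docker\.sock"), "/run/docker.sock"),
    ]

    path = sys.argv[1]                    # e.g. a drop-in copy of the unit
    text = open(path).read()
    for pattern, replacement in REPLACEMENTS:
        text = pattern.sub(replacement, text)
    open(path, "w").write(text)
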
Sep 13 01:33:25.619437 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 01:33:25.619446 systemd[1]: Created slice user.slice. Sep 13 01:33:25.619456 systemd[1]: Started systemd-ask-password-console.path. Sep 13 01:33:25.619465 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 01:33:25.619475 systemd[1]: Set up automount boot.automount. Sep 13 01:33:25.619486 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 01:33:25.619496 systemd[1]: Stopped target initrd-switch-root.target. Sep 13 01:33:25.619505 systemd[1]: Stopped target initrd-fs.target. Sep 13 01:33:25.619514 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 01:33:25.619523 systemd[1]: Reached target integritysetup.target. Sep 13 01:33:25.619532 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:33:25.619542 systemd[1]: Reached target remote-fs.target. Sep 13 01:33:25.619551 systemd[1]: Reached target slices.target. Sep 13 01:33:25.619562 systemd[1]: Reached target swap.target. Sep 13 01:33:25.619571 systemd[1]: Reached target torcx.target. Sep 13 01:33:25.619580 systemd[1]: Reached target veritysetup.target. Sep 13 01:33:25.619589 systemd[1]: Listening on systemd-coredump.socket. Sep 13 01:33:25.619598 systemd[1]: Listening on systemd-initctl.socket. Sep 13 01:33:25.619608 systemd[1]: Listening on systemd-networkd.socket. Sep 13 01:33:25.619619 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 01:33:25.619628 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 01:33:25.619638 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 01:33:25.619647 systemd[1]: Mounting dev-hugepages.mount... Sep 13 01:33:25.619657 systemd[1]: Mounting dev-mqueue.mount... Sep 13 01:33:25.619667 systemd[1]: Mounting media.mount... Sep 13 01:33:25.619677 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 01:33:25.619686 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 01:33:25.619697 systemd[1]: Mounting tmp.mount... Sep 13 01:33:25.619706 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 01:33:25.619716 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:25.619725 systemd[1]: Starting kmod-static-nodes.service... Sep 13 01:33:25.619735 systemd[1]: Starting modprobe@configfs.service... Sep 13 01:33:25.619744 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:25.619754 systemd[1]: Starting modprobe@drm.service... Sep 13 01:33:25.619764 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:25.619773 systemd[1]: Starting modprobe@fuse.service... Sep 13 01:33:25.619784 systemd[1]: Starting modprobe@loop.service... Sep 13 01:33:25.619794 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 01:33:25.619803 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 01:33:25.619812 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 01:33:25.619822 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 01:33:25.619831 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 01:33:25.619842 systemd[1]: Stopped systemd-journald.service. Sep 13 01:33:25.619852 systemd[1]: systemd-journald.service: Consumed 3.035s CPU time. Sep 13 01:33:25.619863 systemd[1]: Starting systemd-journald.service... Sep 13 01:33:25.619873 kernel: loop: module loaded Sep 13 01:33:25.619882 systemd[1]: Starting systemd-modules-load.service... 
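
The modprobe@<name>.service instances being started above are one-shot wrappers that load a single module each (configfs, dm_mod, drm, efi_pstore, fuse, loop; the nearby "loop: module loaded" and fuse init lines are the result). The template boils down to one modprobe call per instance; a sketch (the exact flags vary by systemd version, so treat -abq as an assumption):

    #!/usr/bin/env python3
    import subprocess

    for module in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
        # -a: take all names, -b: honor blacklists, -q: stay quiet if absent
        subprocess.run(["modprobe", "-abq", module], check=False)
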
Sep 13 01:33:25.619891 systemd[1]: Starting systemd-network-generator.service... Sep 13 01:33:25.619901 systemd[1]: Starting systemd-remount-fs.service... Sep 13 01:33:25.619910 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 01:33:25.619920 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 01:33:25.619929 systemd[1]: Stopped verity-setup.service. Sep 13 01:33:25.619938 kernel: fuse: init (API version 7.34) Sep 13 01:33:25.619949 systemd[1]: Mounted dev-hugepages.mount. Sep 13 01:33:25.619958 systemd[1]: Mounted dev-mqueue.mount. Sep 13 01:33:25.619971 systemd-journald[1203]: Journal started Sep 13 01:33:25.620006 systemd-journald[1203]: Runtime Journal (/run/log/journal/7ee9a5fc70e445019d7bd903372eaf05) is 8.0M, max 78.5M, 70.5M free. Sep 13 01:33:11.697000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 01:33:12.823000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:33:12.823000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 01:33:12.840000 audit: BPF prog-id=10 op=LOAD Sep 13 01:33:12.840000 audit: BPF prog-id=10 op=UNLOAD Sep 13 01:33:12.857000 audit: BPF prog-id=11 op=LOAD Sep 13 01:33:12.857000 audit: BPF prog-id=11 op=UNLOAD Sep 13 01:33:14.437000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 01:33:14.437000 audit[1101]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022804 a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:14.437000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:33:14.446000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 01:33:14.446000 audit[1101]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228d9 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:14.446000 audit: CWD cwd="/" Sep 13 01:33:14.446000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 01:33:14.446000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 01:33:14.446000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 01:33:24.877000 audit: BPF prog-id=12 op=LOAD Sep 13 01:33:24.877000 audit: BPF prog-id=3 op=UNLOAD Sep 13 01:33:24.877000 audit: BPF prog-id=13 op=LOAD Sep 13 01:33:24.877000 audit: BPF prog-id=14 op=LOAD Sep 13 01:33:24.877000 audit: BPF prog-id=4 op=UNLOAD Sep 13 01:33:24.877000 audit: BPF prog-id=5 op=UNLOAD Sep 13 01:33:24.883000 audit: BPF prog-id=15 op=LOAD Sep 13 01:33:24.883000 audit: BPF prog-id=12 op=UNLOAD Sep 13 01:33:24.888000 audit: BPF prog-id=16 op=LOAD Sep 13 01:33:24.894000 audit: BPF prog-id=17 op=LOAD Sep 13 01:33:24.894000 audit: BPF prog-id=13 op=UNLOAD Sep 13 01:33:24.894000 audit: BPF prog-id=14 op=UNLOAD Sep 13 01:33:24.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:24.928000 audit: BPF prog-id=15 op=UNLOAD Sep 13 01:33:24.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:24.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:25.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:25.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:25.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:25.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:25.489000 audit: BPF prog-id=18 op=LOAD Sep 13 01:33:25.489000 audit: BPF prog-id=19 op=LOAD Sep 13 01:33:25.489000 audit: BPF prog-id=20 op=LOAD Sep 13 01:33:25.489000 audit: BPF prog-id=16 op=UNLOAD Sep 13 01:33:25.489000 audit: BPF prog-id=17 op=UNLOAD Sep 13 01:33:25.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:25.616000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 01:33:25.616000 audit[1203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe2e35ff0 a2=4000 a3=1 items=0 ppid=1 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:25.616000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 01:33:24.876642 systemd[1]: Queued start job for default target multi-user.target. Sep 13 01:33:14.324529 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:33:24.876655 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 13 01:33:14.353513 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 01:33:24.895621 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 01:33:14.353533 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 01:33:24.895993 systemd[1]: systemd-journald.service: Consumed 3.035s CPU time. Sep 13 01:33:14.353572 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 01:33:14.353582 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 01:33:14.353621 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 01:33:14.353633 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 01:33:14.353832 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 01:33:14.353867 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 01:33:14.353879 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 01:33:14.397757 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 01:33:14.397809 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 01:33:14.397835 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 01:33:14.397849 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 01:33:14.397870 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 01:33:14.397883 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:14Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 01:33:20.785084 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 01:33:20.785388 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 01:33:20.785486 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 01:33:20.785642 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 01:33:20.785691 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:20Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 01:33:20.785751 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-09-13T01:33:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 01:33:25.629734 systemd[1]: Started systemd-journald.service. Sep 13 01:33:25.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:25.630504 systemd[1]: Mounted media.mount. Sep 13 01:33:25.634211 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 01:33:25.638823 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 01:33:25.643729 systemd[1]: Mounted tmp.mount. 
Sep 13 01:33:25.647564 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 01:33:25.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.652466 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:33:25.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.657182 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 01:33:25.657467 systemd[1]: Finished modprobe@configfs.service.
Sep 13 01:33:25.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.662508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:25.662633 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:25.667729 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:33:25.667860 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:33:25.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.672432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:25.672552 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:25.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.677648 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 01:33:25.677773 systemd[1]: Finished modprobe@fuse.service.
Sep 13 01:33:25.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.682488 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:25.682607 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:25.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.687147 systemd[1]: Finished systemd-network-generator.service.
Sep 13 01:33:25.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.692493 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 01:33:25.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.697855 systemd[1]: Reached target network-pre.target.
Sep 13 01:33:25.703444 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 01:33:25.708819 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 01:33:25.712997 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 01:33:25.758819 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 01:33:25.764246 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 01:33:25.768517 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:25.769712 systemd[1]: Starting systemd-random-seed.service...
Sep 13 01:33:25.774002 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:25.775192 systemd[1]: Starting systemd-sysusers.service...
Sep 13 01:33:25.782038 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:33:25.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.787089 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:33:25.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.792584 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 01:33:25.797520 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 01:33:25.803121 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:33:25.808422 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 01:33:25.815850 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 01:33:25.847913 systemd-journald[1203]: Time spent on flushing to /var/log/journal/7ee9a5fc70e445019d7bd903372eaf05 is 14.100ms for 1112 entries.
Sep 13 01:33:25.847913 systemd-journald[1203]: System Journal (/var/log/journal/7ee9a5fc70e445019d7bd903372eaf05) is 8.0M, max 2.6G, 2.6G free.
Sep 13 01:33:25.965884 systemd-journald[1203]: Received client request to flush runtime journal.
Sep 13 01:33:25.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:25.867180 systemd[1]: Finished systemd-random-seed.service.
Sep 13 01:33:25.872585 systemd[1]: Reached target first-boot-complete.target.
Sep 13 01:33:25.966944 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 01:33:25.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:26.025599 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:33:26.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:26.738465 systemd[1]: Finished systemd-sysusers.service.
Sep 13 01:33:26.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:26.744455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:33:27.512379 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 01:33:27.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:27.819528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:33:27.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:27.825000 audit: BPF prog-id=21 op=LOAD
Sep 13 01:33:27.825000 audit: BPF prog-id=22 op=LOAD
Sep 13 01:33:27.825000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 01:33:27.825000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 01:33:27.826106 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:33:27.844021 systemd-udevd[1226]: Using default interface naming scheme 'v252'.
Sep 13 01:33:28.985971 systemd[1]: Started systemd-udevd.service.
Sep 13 01:33:28.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:28.995000 audit: BPF prog-id=23 op=LOAD
Sep 13 01:33:28.998190 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:33:29.021921 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Sep 13 01:33:29.099000 audit: BPF prog-id=24 op=LOAD
Sep 13 01:33:29.099000 audit: BPF prog-id=25 op=LOAD
Sep 13 01:33:29.099000 audit: BPF prog-id=26 op=LOAD
Sep 13 01:33:29.100726 systemd[1]: Starting systemd-userdbd.service...
Sep 13 01:33:29.113000 audit[1235]: AVC avc: denied { confidentiality } for pid=1235 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 01:33:29.120290 kernel: hv_vmbus: registering driver hv_balloon
Sep 13 01:33:29.120384 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 13 01:33:29.130542 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 13 01:33:29.139309 kernel: hv_vmbus: registering driver hyperv_fb
Sep 13 01:33:29.151548 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 13 01:33:29.151645 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 13 01:33:29.161090 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 01:33:29.161186 kernel: Console: switching to colour dummy device 80x25
Sep 13 01:33:29.163279 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:33:29.113000 audit[1235]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf5b92290 a1=aa2c a2=ffff91f824b0 a3=aaaaf5af0010 items=12 ppid=1226 pid=1235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:29.113000 audit: CWD cwd="/"
Sep 13 01:33:29.113000 audit: PATH item=0 name=(null) inode=6963 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=1 name=(null) inode=10758 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=2 name=(null) inode=10758 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=3 name=(null) inode=10759 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=4 name=(null) inode=10758 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=5 name=(null) inode=10760 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=6 name=(null) inode=10758 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=7 name=(null) inode=10761 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=8 name=(null) inode=10758 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=9 name=(null) inode=10762 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=10 name=(null) inode=10758 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PATH item=11 name=(null) inode=10763 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:29.113000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 13 01:33:29.188004 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 01:33:29.188103 kernel: hv_vmbus: registering driver hv_utils
Sep 13 01:33:29.198364 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 01:33:29.198452 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 01:33:29.198484 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 01:33:29.199422 systemd[1]: Started systemd-userdbd.service.
Sep 13 01:33:29.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:29.439603 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 01:33:29.448617 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 01:33:29.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:29.454860 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 01:33:29.786171 lvm[1302]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:33:29.868225 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 01:33:29.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:29.873240 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:33:29.876899 kernel: kauditd_printk_skb: 71 callbacks suppressed
Sep 13 01:33:29.876946 kernel: audit: type=1130 audit(1757727209.872:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:29.899399 systemd-networkd[1247]: lo: Link UP
Sep 13 01:33:29.899407 systemd-networkd[1247]: lo: Gained carrier
Sep 13 01:33:29.899545 systemd[1]: Starting lvm2-activation.service...
Sep 13 01:33:29.899850 systemd-networkd[1247]: Enumeration completed
Sep 13 01:33:29.903549 systemd[1]: Started systemd-networkd.service.
Sep 13 01:33:29.904714 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:33:29.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:29.925281 kernel: audit: type=1130 audit(1757727209.907:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:29.926107 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 01:33:29.942004 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:33:29.943172 systemd[1]: Finished lvm2-activation.service.
Sep 13 01:33:29.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:29.947961 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:33:29.965264 kernel: audit: type=1130 audit(1757727209.947:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:29.968750 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 01:33:29.968780 systemd[1]: Reached target local-fs.target.
Sep 13 01:33:29.973499 systemd[1]: Reached target machines.target.
Sep 13 01:33:29.979466 systemd[1]: Starting ldconfig.service...
Sep 13 01:33:30.013942 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.014012 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:30.015191 systemd[1]: Starting systemd-boot-update.service...
Sep 13 01:33:30.020582 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 13 01:33:30.027086 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 13 01:33:30.035483 systemd[1]: Starting systemd-sysext.service...
Sep 13 01:33:30.037262 kernel: mlx5_core ad10:00:02.0 enP44304s1: Link up
Sep 13 01:33:30.044260 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 01:33:30.089274 kernel: hv_netvsc 002248c1-6dd2-0022-48c1-6dd2002248c1 eth0: Data path switched to VF: enP44304s1
Sep 13 01:33:30.093041 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1306 (bootctl)
Sep 13 01:33:30.094588 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 13 01:33:30.099584 systemd-networkd[1247]: enP44304s1: Link UP
Sep 13 01:33:30.100347 systemd-networkd[1247]: eth0: Link UP
Sep 13 01:33:30.100511 systemd-networkd[1247]: eth0: Gained carrier
Sep 13 01:33:30.107983 systemd-networkd[1247]: enP44304s1: Gained carrier
Sep 13 01:33:30.116393 systemd-networkd[1247]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 01:33:30.169583 systemd[1]: Unmounting usr-share-oem.mount...
Sep 13 01:33:30.174532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 01:33:30.175301 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 13 01:33:30.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.184177 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 13 01:33:30.204365 kernel: audit: type=1130 audit(1757727210.180:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.223278 kernel: audit: type=1130 audit(1757727210.204:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.243735 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 13 01:33:30.243931 systemd[1]: Unmounted usr-share-oem.mount.
Sep 13 01:33:30.311278 kernel: loop0: detected capacity change from 0 to 203944
Sep 13 01:33:30.375272 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 01:33:30.396288 kernel: loop1: detected capacity change from 0 to 203944
Sep 13 01:33:30.408391 (sd-sysext)[1319]: Using extensions 'kubernetes'.
Sep 13 01:33:30.409138 (sd-sysext)[1319]: Merged extensions into '/usr'.
Sep 13 01:33:30.425885 systemd[1]: Mounting usr-share-oem.mount...
Sep 13 01:33:30.429685 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.430962 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:30.435953 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:30.442567 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:30.446359 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.446511 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:30.449132 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 01:33:30.453725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:30.453888 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:30.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.463693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:30.463825 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:30.479269 kernel: audit: type=1130 audit(1757727210.458:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.479373 kernel: audit: type=1131 audit(1757727210.463:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.494116 systemd-fsck[1315]: fsck.fat 4.2 (2021-01-31)
Sep 13 01:33:30.494116 systemd-fsck[1315]: /dev/sda1: 236 files, 117310/258078 clusters
Sep 13 01:33:30.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.501835 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 13 01:33:30.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.542655 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:30.542926 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:30.543187 kernel: audit: type=1130 audit(1757727210.500:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.543293 kernel: audit: type=1131 audit(1757727210.500:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.567960 systemd[1]: Finished systemd-sysext.service.
Sep 13 01:33:30.568165 kernel: audit: type=1130 audit(1757727210.519:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.575814 systemd[1]: Mounting boot.mount...
Sep 13 01:33:30.582961 systemd[1]: Starting ensure-sysext.service...
Sep 13 01:33:30.587129 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:30.587210 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.588364 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 01:33:30.606432 systemd[1]: Reloading.
Sep 13 01:33:30.632040 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 01:33:30.668016 /usr/lib/systemd/system-generators/torcx-generator[1353]: time="2025-09-13T01:33:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:33:30.668047 /usr/lib/systemd/system-generators/torcx-generator[1353]: time="2025-09-13T01:33:30Z" level=info msg="torcx already run"
Sep 13 01:33:30.736438 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:33:30.736457 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:33:30.753115 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:33:30.822000 audit: BPF prog-id=27 op=LOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=18 op=UNLOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=28 op=LOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=29 op=LOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=19 op=UNLOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=20 op=UNLOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=30 op=LOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=31 op=LOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=21 op=UNLOAD
Sep 13 01:33:30.822000 audit: BPF prog-id=22 op=UNLOAD
Sep 13 01:33:30.824000 audit: BPF prog-id=32 op=LOAD
Sep 13 01:33:30.824000 audit: BPF prog-id=24 op=UNLOAD
Sep 13 01:33:30.824000 audit: BPF prog-id=33 op=LOAD
Sep 13 01:33:30.824000 audit: BPF prog-id=34 op=LOAD
Sep 13 01:33:30.824000 audit: BPF prog-id=25 op=UNLOAD
Sep 13 01:33:30.824000 audit: BPF prog-id=26 op=UNLOAD
Sep 13 01:33:30.826000 audit: BPF prog-id=35 op=LOAD
Sep 13 01:33:30.826000 audit: BPF prog-id=23 op=UNLOAD
Sep 13 01:33:30.829715 systemd[1]: Mounted boot.mount.
Sep 13 01:33:30.842835 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.844660 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:30.850233 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:30.850283 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 01:33:30.856742 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:30.860536 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.860667 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:30.861617 systemd[1]: Finished systemd-boot-update.service.
Sep 13 01:33:30.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.866774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:30.866895 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:30.867683 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 01:33:30.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.871842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:30.871968 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:30.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.877348 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:30.877469 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:30.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.883781 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.885134 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:30.890783 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:30.896328 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:30.900313 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.900449 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:30.901271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:30.901414 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:30.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.907346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:30.907475 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:30.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.913375 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:30.913497 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:30.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.918597 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:30.918690 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.920970 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.922342 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:30.927608 systemd[1]: Starting modprobe@drm.service...
Sep 13 01:33:30.932618 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:30.938241 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:30.942305 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.942440 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:30.943401 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:30.943538 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:30.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.948682 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:33:30.948804 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:33:30.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.953836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:30.953956 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:30.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.959485 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:30.959604 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:30.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:30.964693 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:30.964764 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:30.965972 systemd[1]: Finished ensure-sysext.service.
Sep 13 01:33:30.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:31.431472 systemd-networkd[1247]: eth0: Gained IPv6LL
Sep 13 01:33:31.437229 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 01:33:31.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:34.292956 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 01:33:34.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:34.300124 systemd[1]: Starting audit-rules.service...
Sep 13 01:33:34.305016 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 01:33:34.310465 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 01:33:34.315000 audit: BPF prog-id=36 op=LOAD
Sep 13 01:33:34.317011 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:33:34.321000 audit: BPF prog-id=37 op=LOAD
Sep 13 01:33:34.322521 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 01:33:34.328807 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 01:33:34.380000 audit[1428]: SYSTEM_BOOT pid=1428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:34.383480 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 01:33:34.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:34.401445 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 01:33:34.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:34.406518 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 01:33:34.449551 systemd[1]: Started systemd-timesyncd.service.
Sep 13 01:33:34.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:34.454562 systemd[1]: Reached target time-set.target.
Sep 13 01:33:34.502229 systemd-resolved[1425]: Positive Trust Anchors:
Sep 13 01:33:34.502547 systemd-resolved[1425]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:33:34.502629 systemd-resolved[1425]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:33:34.595290 systemd-resolved[1425]: Using system hostname 'ci-3510.3.8-n-49eff79a60'.
Sep 13 01:33:34.597081 systemd[1]: Started systemd-resolved.service.
Sep 13 01:33:34.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:34.601885 systemd[1]: Reached target network.target.
Sep 13 01:33:34.606192 systemd[1]: Reached target network-online.target.
Sep 13 01:33:34.610798 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:33:34.705456 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 01:33:34.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:34.844000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 01:33:34.844000 audit[1443]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc4d0d90 a2=420 a3=0 items=0 ppid=1422 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:34.844000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 01:33:34.845591 augenrules[1443]: No rules
Sep 13 01:33:34.846554 systemd[1]: Finished audit-rules.service.
Sep 13 01:33:34.980192 systemd-timesyncd[1426]: Contacted time server 66.85.78.80:123 (0.flatcar.pool.ntp.org).
Sep 13 01:33:34.980618 systemd-timesyncd[1426]: Initial clock synchronization to Sat 2025-09-13 01:33:34.987717 UTC.
Sep 13 01:33:42.217170 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 01:33:42.228656 systemd[1]: Finished ldconfig.service.
Sep 13 01:33:42.234967 systemd[1]: Starting systemd-update-done.service...
Sep 13 01:33:42.287876 systemd[1]: Finished systemd-update-done.service.
Sep 13 01:33:42.292637 systemd[1]: Reached target sysinit.target.
Sep 13 01:33:42.297085 systemd[1]: Started motdgen.path.
Sep 13 01:33:42.300969 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 01:33:42.307164 systemd[1]: Started logrotate.timer.
Sep 13 01:33:42.311092 systemd[1]: Started mdadm.timer.
Sep 13 01:33:42.314775 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 01:33:42.319337 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 01:33:42.319370 systemd[1]: Reached target paths.target.
Sep 13 01:33:42.323401 systemd[1]: Reached target timers.target.
Sep 13 01:33:42.328498 systemd[1]: Listening on dbus.socket.
Sep 13 01:33:42.333416 systemd[1]: Starting docker.socket...
Sep 13 01:33:42.368616 systemd[1]: Listening on sshd.socket.
Sep 13 01:33:42.372711 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:42.373190 systemd[1]: Listening on docker.socket.
Sep 13 01:33:42.377334 systemd[1]: Reached target sockets.target.
Sep 13 01:33:42.381663 systemd[1]: Reached target basic.target.
Sep 13 01:33:42.386004 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 01:33:42.386041 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 01:33:42.387197 systemd[1]: Starting containerd.service...
Sep 13 01:33:42.391904 systemd[1]: Starting dbus.service...
Sep 13 01:33:42.396192 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 01:33:42.401388 systemd[1]: Starting extend-filesystems.service...
Sep 13 01:33:42.405670 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 01:33:42.423567 systemd[1]: Starting kubelet.service...
Sep 13 01:33:42.428308 systemd[1]: Starting motdgen.service...
Sep 13 01:33:42.432864 systemd[1]: Started nvidia.service.
Sep 13 01:33:42.438214 systemd[1]: Starting prepare-helm.service...
Sep 13 01:33:42.443222 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 01:33:42.448519 systemd[1]: Starting sshd-keygen.service...
Sep 13 01:33:42.454369 systemd[1]: Starting systemd-logind.service...
Sep 13 01:33:42.458334 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 01:33:42.458407 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 01:33:42.458806 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 01:33:42.459482 systemd[1]: Starting update-engine.service...
Sep 13 01:33:42.464444 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 01:33:42.513700 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 01:33:42.513869 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 01:33:42.546991 extend-filesystems[1454]: Found loop1 Sep 13 01:33:42.552682 extend-filesystems[1454]: Found sda Sep 13 01:33:42.552682 extend-filesystems[1454]: Found sda1 Sep 13 01:33:42.552682 extend-filesystems[1454]: Found sda2 Sep 13 01:33:42.552682 extend-filesystems[1454]: Found sda3 Sep 13 01:33:42.552682 extend-filesystems[1454]: Found usr Sep 13 01:33:42.552682 extend-filesystems[1454]: Found sda4 Sep 13 01:33:42.552682 extend-filesystems[1454]: Found sda6 Sep 13 01:33:42.552682 extend-filesystems[1454]: Found sda7 Sep 13 01:33:42.552682 extend-filesystems[1454]: Found sda9 Sep 13 01:33:42.552682 extend-filesystems[1454]: Checking size of /dev/sda9 Sep 13 01:33:42.597550 jq[1465]: true Sep 13 01:33:42.600744 jq[1453]: false Sep 13 01:33:42.554460 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 01:33:42.554645 systemd[1]: Finished motdgen.service. Sep 13 01:33:42.573047 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 01:33:42.573215 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 01:33:42.618066 jq[1482]: true Sep 13 01:33:42.627988 env[1475]: time="2025-09-13T01:33:42.627889043Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 01:33:42.632207 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 01:33:42.634602 systemd-logind[1463]: New seat seat0. Sep 13 01:33:42.667515 tar[1468]: linux-arm64/helm Sep 13 01:33:42.694088 extend-filesystems[1454]: Old size kept for /dev/sda9 Sep 13 01:33:42.699495 extend-filesystems[1454]: Found sr0 Sep 13 01:33:42.699460 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 01:33:42.699643 systemd[1]: Finished extend-filesystems.service. Sep 13 01:33:42.737887 env[1475]: time="2025-09-13T01:33:42.737838479Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 01:33:42.738038 env[1475]: time="2025-09-13T01:33:42.738015013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:42.740801 env[1475]: time="2025-09-13T01:33:42.740758691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:42.740801 env[1475]: time="2025-09-13T01:33:42.740796222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:42.744014 env[1475]: time="2025-09-13T01:33:42.743973593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:42.744079 env[1475]: time="2025-09-13T01:33:42.744015526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 13 01:33:42.744079 env[1475]: time="2025-09-13T01:33:42.744032051Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 01:33:42.744079 env[1475]: time="2025-09-13T01:33:42.744042414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:42.744155 env[1475]: time="2025-09-13T01:33:42.744138044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:42.746469 env[1475]: time="2025-09-13T01:33:42.744373676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:42.746638 env[1475]: time="2025-09-13T01:33:42.746609759Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:42.746679 env[1475]: time="2025-09-13T01:33:42.746639768Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 01:33:42.746728 env[1475]: time="2025-09-13T01:33:42.746707829Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 01:33:42.746773 env[1475]: time="2025-09-13T01:33:42.746726474Z" level=info msg="metadata content store policy set" policy=shared Sep 13 01:33:42.767492 env[1475]: time="2025-09-13T01:33:42.767446846Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 01:33:42.767492 env[1475]: time="2025-09-13T01:33:42.767494620Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 01:33:42.767640 env[1475]: time="2025-09-13T01:33:42.767509665Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 01:33:42.767640 env[1475]: time="2025-09-13T01:33:42.767546476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 01:33:42.767640 env[1475]: time="2025-09-13T01:33:42.767561521Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 01:33:42.767640 env[1475]: time="2025-09-13T01:33:42.767577406Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 01:33:42.767640 env[1475]: time="2025-09-13T01:33:42.767590970Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 01:33:42.767971 env[1475]: time="2025-09-13T01:33:42.767948599Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 01:33:42.768010 env[1475]: time="2025-09-13T01:33:42.767972366Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 01:33:42.768010 env[1475]: time="2025-09-13T01:33:42.767986811Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 01:33:42.768010 env[1475]: time="2025-09-13T01:33:42.767998654Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 13 01:33:42.768077 env[1475]: time="2025-09-13T01:33:42.768012258Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 01:33:42.768169 env[1475]: time="2025-09-13T01:33:42.768142818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 01:33:42.768241 env[1475]: time="2025-09-13T01:33:42.768221722Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 01:33:42.768871 env[1475]: time="2025-09-13T01:33:42.768508490Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 01:33:42.768871 env[1475]: time="2025-09-13T01:33:42.768548822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.768871 env[1475]: time="2025-09-13T01:33:42.768562667Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 01:33:42.768871 env[1475]: time="2025-09-13T01:33:42.768639290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.769176 env[1475]: time="2025-09-13T01:33:42.769027689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.769176 env[1475]: time="2025-09-13T01:33:42.769062299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.769176 env[1475]: time="2025-09-13T01:33:42.769075143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.769176 env[1475]: time="2025-09-13T01:33:42.769088147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.769176 env[1475]: time="2025-09-13T01:33:42.769099711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.769176 env[1475]: time="2025-09-13T01:33:42.769110714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.769176 env[1475]: time="2025-09-13T01:33:42.769122038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.769176 env[1475]: time="2025-09-13T01:33:42.769136722Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 01:33:42.772753 env[1475]: time="2025-09-13T01:33:42.771298383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.772753 env[1475]: time="2025-09-13T01:33:42.771329912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.772753 env[1475]: time="2025-09-13T01:33:42.771345357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.772753 env[1475]: time="2025-09-13T01:33:42.771358521Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 01:33:42.772753 env[1475]: time="2025-09-13T01:33:42.771374686Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 01:33:42.772753 env[1475]: time="2025-09-13T01:33:42.771388130Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 01:33:42.772753 env[1475]: time="2025-09-13T01:33:42.771405975Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 01:33:42.772753 env[1475]: time="2025-09-13T01:33:42.771441106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 01:33:42.772955 env[1475]: time="2025-09-13T01:33:42.771664294Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 01:33:42.772955 env[1475]: time="2025-09-13T01:33:42.771718471Z" level=info msg="Connect containerd service" Sep 13 01:33:42.772955 env[1475]: time="2025-09-13T01:33:42.771750881Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 01:33:42.772955 env[1475]: time="2025-09-13T01:33:42.772309011Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:33:42.795950 env[1475]: time="2025-09-13T01:33:42.773203605Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 13 01:33:42.795950 env[1475]: time="2025-09-13T01:33:42.773273626Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 01:33:42.795950 env[1475]: time="2025-09-13T01:33:42.773321761Z" level=info msg="containerd successfully booted in 0.153353s" Sep 13 01:33:42.795950 env[1475]: time="2025-09-13T01:33:42.781703322Z" level=info msg="Start subscribing containerd event" Sep 13 01:33:42.795950 env[1475]: time="2025-09-13T01:33:42.781778025Z" level=info msg="Start recovering state" Sep 13 01:33:42.796061 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Sep 13 01:33:42.773403 systemd[1]: Started containerd.service. Sep 13 01:33:42.788124 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 01:33:42.796649 env[1475]: time="2025-09-13T01:33:42.796621720Z" level=info msg="Start event monitor" Sep 13 01:33:42.796756 env[1475]: time="2025-09-13T01:33:42.796740757Z" level=info msg="Start snapshots syncer" Sep 13 01:33:42.796848 env[1475]: time="2025-09-13T01:33:42.796822222Z" level=info msg="Start cni network conf syncer for default" Sep 13 01:33:42.797882 env[1475]: time="2025-09-13T01:33:42.797843294Z" level=info msg="Start streaming server" Sep 13 01:33:42.863932 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 01:33:43.243622 dbus-daemon[1452]: [system] SELinux support is enabled Sep 13 01:33:43.243784 systemd[1]: Started dbus.service. Sep 13 01:33:43.249509 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 01:33:43.249539 systemd[1]: Reached target system-config.target. Sep 13 01:33:43.257209 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 01:33:43.257231 systemd[1]: Reached target user-config.target. Sep 13 01:33:43.264709 dbus-daemon[1452]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 01:33:43.265137 systemd[1]: Started systemd-logind.service. Sep 13 01:33:43.315346 update_engine[1464]: I0913 01:33:43.292912 1464 main.cc:92] Flatcar Update Engine starting Sep 13 01:33:43.350169 tar[1468]: linux-arm64/LICENSE Sep 13 01:33:43.350169 tar[1468]: linux-arm64/README.md Sep 13 01:33:43.354874 systemd[1]: Finished prepare-helm.service. Sep 13 01:33:43.387589 systemd[1]: Started update-engine.service. Sep 13 01:33:43.387874 update_engine[1464]: I0913 01:33:43.387634 1464 update_check_scheduler.cc:74] Next update check in 2m57s Sep 13 01:33:43.393853 systemd[1]: Started locksmithd.service. Sep 13 01:33:43.527838 systemd[1]: Started kubelet.service. Sep 13 01:33:43.979800 kubelet[1560]: E0913 01:33:43.979733 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:33:43.981621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:33:43.981749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:33:44.655731 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 01:33:44.672758 systemd[1]: Finished sshd-keygen.service. Sep 13 01:33:44.678635 systemd[1]: Starting issuegen.service... 
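The "failed to load cni during init" error a little further up is expected on a freshly provisioned node: per the CRI config dump, the plugin watches NetworkPluginConfDir (/etc/cni/net.d) and holds off on pod networking until a network config appears there, normally installed later by a CNI add-on. A minimal sketch of such a config, assuming the reference CNI plugins are present under the NetworkPluginBinDir from the same dump (/opt/cni/bin) and using a hypothetical 10.22.0.0/16 pod subnet:

    import json
    import os

    # Hypothetical pod subnet; in a real cluster the CNI add-on chooses this.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "containerd-net",
        "plugins": [
            {
                "type": "bridge",        # reference bridge plugin from /opt/cni/bin
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.22.0.0/16",
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    os.makedirs("/etc/cni/net.d", exist_ok=True)
    with open("/etc/cni/net.d/10-containerd-net.conflist", "w") as f:
        json.dump(conflist, f, indent=2)

The "Start cni network conf syncer for default" entry above is the component that picks such a file up once it lands, without a containerd restart.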
Sep 13 01:33:44.683569 systemd[1]: Started waagent.service. Sep 13 01:33:44.687934 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 01:33:44.688098 systemd[1]: Finished issuegen.service. Sep 13 01:33:44.693772 systemd[1]: Starting systemd-user-sessions.service... Sep 13 01:33:44.748185 systemd[1]: Finished systemd-user-sessions.service. Sep 13 01:33:44.754806 systemd[1]: Started getty@tty1.service. Sep 13 01:33:44.759938 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 13 01:33:44.764780 systemd[1]: Reached target getty.target. Sep 13 01:33:44.768891 systemd[1]: Reached target multi-user.target. Sep 13 01:33:44.775768 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 01:33:44.786886 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 01:33:44.787048 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 01:33:44.792778 systemd[1]: Startup finished in 740ms (kernel) + 16.475s (initrd) + 34.001s (userspace) = 51.216s. Sep 13 01:33:44.845537 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 01:33:45.839345 login[1584]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Sep 13 01:33:45.870625 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:33:46.045931 systemd[1]: Created slice user-500.slice. Sep 13 01:33:46.047055 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 01:33:46.049423 systemd-logind[1463]: New session 2 of user core. Sep 13 01:33:46.107278 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 01:33:46.108720 systemd[1]: Starting user@500.service... Sep 13 01:33:46.172525 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:33:46.640869 systemd[1587]: Queued start job for default target default.target. Sep 13 01:33:46.641824 systemd[1587]: Reached target paths.target. Sep 13 01:33:46.641855 systemd[1587]: Reached target sockets.target. Sep 13 01:33:46.641869 systemd[1587]: Reached target timers.target. Sep 13 01:33:46.641880 systemd[1587]: Reached target basic.target. Sep 13 01:33:46.641928 systemd[1587]: Reached target default.target. Sep 13 01:33:46.641951 systemd[1587]: Startup finished in 463ms. Sep 13 01:33:46.642041 systemd[1]: Started user@500.service. Sep 13 01:33:46.642935 systemd[1]: Started session-2.scope. Sep 13 01:33:46.840793 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:33:46.845234 systemd[1]: Started session-1.scope. Sep 13 01:33:46.845632 systemd-logind[1463]: New session 1 of user core. Sep 13 01:33:54.191576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 01:33:54.191743 systemd[1]: Stopped kubelet.service. Sep 13 01:33:54.193038 systemd[1]: Starting kubelet.service... Sep 13 01:33:54.684007 systemd[1]: Started kubelet.service. Sep 13 01:33:54.721754 kubelet[1613]: E0913 01:33:54.721702 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:33:54.724389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:33:54.724510 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
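The kubelet crash loop that starts here ("open /var/lib/kubelet/config.yaml: no such file or directory", then "Failed with result 'exit-code'" and a climbing restart counter) is normal on a node that has not yet been initialized: kubeadm init or kubeadm join generates that config file, and systemd simply keeps restarting the unit until it exists. A hand-written stand-in, for illustration only; the cgroupDriver value matches the SystemdCgroup:true runc option in the containerd config dump above:

    import os

    # Illustration only: kubeadm normally writes this file during init/join.
    KUBELET_CONFIG = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
    )

    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(KUBELET_CONFIG)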
Sep 13 01:33:55.146232 waagent[1581]: 2025-09-13T01:33:55.146129Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 13 01:33:55.181694 waagent[1581]: 2025-09-13T01:33:55.181607Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 13 01:33:55.186670 waagent[1581]: 2025-09-13T01:33:55.186608Z INFO Daemon Daemon Python: 3.9.16 Sep 13 01:33:55.191656 waagent[1581]: 2025-09-13T01:33:55.191557Z INFO Daemon Daemon Run daemon Sep 13 01:33:55.196540 waagent[1581]: 2025-09-13T01:33:55.196469Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 13 01:33:55.229115 waagent[1581]: 2025-09-13T01:33:55.228968Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 13 01:33:55.245594 waagent[1581]: 2025-09-13T01:33:55.245459Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 01:33:55.255864 waagent[1581]: 2025-09-13T01:33:55.255788Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 01:33:55.261163 waagent[1581]: 2025-09-13T01:33:55.261095Z INFO Daemon Daemon Using waagent for provisioning Sep 13 01:33:55.267066 waagent[1581]: 2025-09-13T01:33:55.267000Z INFO Daemon Daemon Activate resource disk Sep 13 01:33:55.272334 waagent[1581]: 2025-09-13T01:33:55.272269Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 13 01:33:55.287326 waagent[1581]: 2025-09-13T01:33:55.287230Z INFO Daemon Daemon Found device: None Sep 13 01:33:55.291946 waagent[1581]: 2025-09-13T01:33:55.291878Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 13 01:33:55.300699 waagent[1581]: 2025-09-13T01:33:55.300635Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 13 01:33:55.313096 waagent[1581]: 2025-09-13T01:33:55.313030Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 01:33:55.319448 waagent[1581]: 2025-09-13T01:33:55.319387Z INFO Daemon Daemon Running default provisioning handler Sep 13 01:33:55.333395 waagent[1581]: 2025-09-13T01:33:55.333269Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Sep 13 01:33:55.349197 waagent[1581]: 2025-09-13T01:33:55.349057Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 01:33:55.359609 waagent[1581]: 2025-09-13T01:33:55.359518Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 01:33:55.365086 waagent[1581]: 2025-09-13T01:33:55.365010Z INFO Daemon Daemon Copying ovf-env.xml Sep 13 01:33:55.536987 waagent[1581]: 2025-09-13T01:33:55.534676Z INFO Daemon Daemon Successfully mounted dvd Sep 13 01:33:55.675789 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 13 01:33:55.723799 waagent[1581]: 2025-09-13T01:33:55.723655Z INFO Daemon Daemon Detect protocol endpoint Sep 13 01:33:55.728987 waagent[1581]: 2025-09-13T01:33:55.728908Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 01:33:55.734950 waagent[1581]: 2025-09-13T01:33:55.734876Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 13 01:33:55.741708 waagent[1581]: 2025-09-13T01:33:55.741639Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 13 01:33:55.747428 waagent[1581]: 2025-09-13T01:33:55.747366Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 13 01:33:55.752586 waagent[1581]: 2025-09-13T01:33:55.752525Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 13 01:33:55.928426 waagent[1581]: 2025-09-13T01:33:55.928312Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 13 01:33:55.935222 waagent[1581]: 2025-09-13T01:33:55.935177Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 13 01:33:55.940555 waagent[1581]: 2025-09-13T01:33:55.940496Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 13 01:33:56.388645 waagent[1581]: 2025-09-13T01:33:56.388490Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 13 01:33:56.403427 waagent[1581]: 2025-09-13T01:33:56.403355Z INFO Daemon Daemon Forcing an update of the goal state.. Sep 13 01:33:56.409317 waagent[1581]: 2025-09-13T01:33:56.409258Z INFO Daemon Daemon Fetching goal state [incarnation 1] Sep 13 01:33:56.490659 waagent[1581]: 2025-09-13T01:33:56.490527Z INFO Daemon Daemon Found private key matching thumbprint 8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08 Sep 13 01:33:56.499739 waagent[1581]: 2025-09-13T01:33:56.499664Z INFO Daemon Daemon Fetch goal state completed Sep 13 01:33:56.550973 waagent[1581]: 2025-09-13T01:33:56.550906Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 19ab2e51-197a-43c8-b0a8-07424698ac54 New eTag: 12480230721056194298] Sep 13 01:33:56.561617 waagent[1581]: 2025-09-13T01:33:56.561535Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Sep 13 01:33:56.577035 waagent[1581]: 2025-09-13T01:33:56.576971Z INFO Daemon Daemon Starting provisioning Sep 13 01:33:56.582142 waagent[1581]: 2025-09-13T01:33:56.582075Z INFO Daemon Daemon Handle ovf-env.xml. Sep 13 01:33:56.586834 waagent[1581]: 2025-09-13T01:33:56.586776Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-49eff79a60] Sep 13 01:33:56.640416 waagent[1581]: 2025-09-13T01:33:56.640281Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-49eff79a60] Sep 13 01:33:56.646857 waagent[1581]: 2025-09-13T01:33:56.646784Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 13 01:33:56.653273 waagent[1581]: 2025-09-13T01:33:56.653204Z INFO Daemon Daemon Primary interface is [eth0] Sep 13 01:33:56.669272 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Sep 13 01:33:56.669450 systemd[1]: Stopped systemd-networkd-wait-online.service. Sep 13 01:33:56.669504 systemd[1]: Stopping systemd-networkd-wait-online.service... Sep 13 01:33:56.669737 systemd[1]: Stopping systemd-networkd.service... Sep 13 01:33:56.676294 systemd-networkd[1247]: eth0: DHCPv6 lease lost Sep 13 01:33:56.678132 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 01:33:56.678327 systemd[1]: Stopped systemd-networkd.service. Sep 13 01:33:56.680355 systemd[1]: Starting systemd-networkd.service... 
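168.63.129.16, which shows up throughout this stretch, is Azure's fixed WireServer address: the daemon first checks that a route to it exists ("Test for route to 168.63.129.16"), then negotiates protocol versions with it over plain HTTP, which is where the "Fabric preferred wire protocol version:2015-04-05" exchange comes from. A rough reachability probe along the same lines:

    import urllib.request

    WIRESERVER = "168.63.129.16"

    # The WireServer answers plain HTTP on port 80; ?comp=versions returns
    # the protocol versions it supports (2015-04-05 and 2012-11-30 above).
    with urllib.request.urlopen("http://%s/?comp=versions" % WIRESERVER, timeout=5) as resp:
        print(resp.status, resp.read().decode()[:200])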
Sep 13 01:33:56.708676 systemd-networkd[1638]: enP44304s1: Link UP Sep 13 01:33:56.708687 systemd-networkd[1638]: enP44304s1: Gained carrier Sep 13 01:33:56.709719 systemd-networkd[1638]: eth0: Link UP Sep 13 01:33:56.709730 systemd-networkd[1638]: eth0: Gained carrier Sep 13 01:33:56.710077 systemd-networkd[1638]: lo: Link UP Sep 13 01:33:56.710087 systemd-networkd[1638]: lo: Gained carrier Sep 13 01:33:56.710395 systemd-networkd[1638]: eth0: Gained IPv6LL Sep 13 01:33:56.711558 systemd-networkd[1638]: Enumeration completed Sep 13 01:33:56.711657 systemd[1]: Started systemd-networkd.service. Sep 13 01:33:56.713318 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 01:33:56.713517 systemd-networkd[1638]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:33:56.717587 waagent[1581]: 2025-09-13T01:33:56.717274Z INFO Daemon Daemon Create user account if not exists Sep 13 01:33:56.724826 waagent[1581]: 2025-09-13T01:33:56.724741Z INFO Daemon Daemon User core already exists, skip useradd Sep 13 01:33:56.730725 waagent[1581]: 2025-09-13T01:33:56.730649Z INFO Daemon Daemon Configure sudoer Sep 13 01:33:56.741329 systemd-networkd[1638]: eth0: DHCPv4 address 10.200.20.15/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 01:33:56.745212 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 01:33:56.755642 waagent[1581]: 2025-09-13T01:33:56.755545Z INFO Daemon Daemon Configure sshd Sep 13 01:33:56.760063 waagent[1581]: 2025-09-13T01:33:56.760000Z INFO Daemon Daemon Deploy ssh public key. Sep 13 01:33:57.969388 waagent[1581]: 2025-09-13T01:33:57.969314Z INFO Daemon Daemon Provisioning complete Sep 13 01:33:57.989645 waagent[1581]: 2025-09-13T01:33:57.989579Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 13 01:33:57.995827 waagent[1581]: 2025-09-13T01:33:57.995763Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 13 01:33:58.006164 waagent[1581]: 2025-09-13T01:33:58.006102Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Sep 13 01:33:58.302165 waagent[1644]: 2025-09-13T01:33:58.302019Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Sep 13 01:33:58.303261 waagent[1644]: 2025-09-13T01:33:58.303194Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:58.303505 waagent[1644]: 2025-09-13T01:33:58.303456Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:58.315869 waagent[1644]: 2025-09-13T01:33:58.315799Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Sep 13 01:33:58.316147 waagent[1644]: 2025-09-13T01:33:58.316098Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Sep 13 01:33:58.372476 waagent[1644]: 2025-09-13T01:33:58.372346Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08 Sep 13 01:33:58.372913 waagent[1644]: 2025-09-13T01:33:58.372862Z INFO ExtHandler ExtHandler Fetch goal state completed Sep 13 01:33:58.386864 waagent[1644]: 2025-09-13T01:33:58.386811Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 8416152e-cfdb-4e09-81ad-347d3a1024bc New eTag: 12480230721056194298] Sep 13 01:33:58.387604 waagent[1644]: 2025-09-13T01:33:58.387546Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Sep 13 01:33:58.516682 waagent[1644]: 2025-09-13T01:33:58.516540Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 01:33:58.539632 waagent[1644]: 2025-09-13T01:33:58.539546Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1644 Sep 13 01:33:58.543484 waagent[1644]: 2025-09-13T01:33:58.543423Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 01:33:58.544843 waagent[1644]: 2025-09-13T01:33:58.544786Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 13 01:33:58.707675 waagent[1644]: 2025-09-13T01:33:58.707617Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 01:33:58.708267 waagent[1644]: 2025-09-13T01:33:58.708198Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 01:33:58.716316 waagent[1644]: 2025-09-13T01:33:58.716257Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 13 01:33:58.716904 waagent[1644]: 2025-09-13T01:33:58.716849Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 01:33:58.718152 waagent[1644]: 2025-09-13T01:33:58.718090Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Sep 13 01:33:58.719614 waagent[1644]: 2025-09-13T01:33:58.719547Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 01:33:58.719901 waagent[1644]: 2025-09-13T01:33:58.719831Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:58.720471 waagent[1644]: 2025-09-13T01:33:58.720400Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:58.721058 waagent[1644]: 2025-09-13T01:33:58.720992Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
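The "[Errno 30] Read-only file system" error above is a property of the OS rather than an agent bug: on Flatcar, /lib resolves into the read-only, dm-verity-protected /usr partition, so locally added units belong under the writable /etc/systemd/system instead. A sketch of the writable path, with the unit text itself elided:

    import subprocess

    # /lib/systemd/system is read-only on Flatcar; /etc/systemd/system is
    # the writable location for locally installed units.
    UNIT_PATH = "/etc/systemd/system/waagent-network-setup.service"
    unit_body = "..."  # elided: the unit text the agent tried to install

    with open(UNIT_PATH, "w") as f:
        f.write(unit_body)

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "waagent-network-setup.service"], check=True)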
Sep 13 01:33:58.721403 waagent[1644]: 2025-09-13T01:33:58.721341Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 01:33:58.721403 waagent[1644]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 01:33:58.721403 waagent[1644]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 01:33:58.721403 waagent[1644]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 01:33:58.721403 waagent[1644]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:58.721403 waagent[1644]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:58.721403 waagent[1644]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:58.723492 waagent[1644]: 2025-09-13T01:33:58.723331Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 01:33:58.724039 waagent[1644]: 2025-09-13T01:33:58.723959Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:58.724558 waagent[1644]: 2025-09-13T01:33:58.724484Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:58.725150 waagent[1644]: 2025-09-13T01:33:58.725080Z INFO EnvHandler ExtHandler Configure routes Sep 13 01:33:58.725347 waagent[1644]: 2025-09-13T01:33:58.725290Z INFO EnvHandler ExtHandler Gateway:None Sep 13 01:33:58.725544 waagent[1644]: 2025-09-13T01:33:58.725479Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 01:33:58.725727 waagent[1644]: 2025-09-13T01:33:58.725664Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 01:33:58.725917 waagent[1644]: 2025-09-13T01:33:58.725858Z INFO EnvHandler ExtHandler Routes:None Sep 13 01:33:58.727176 waagent[1644]: 2025-09-13T01:33:58.727028Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 01:33:58.727384 waagent[1644]: 2025-09-13T01:33:58.727313Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 13 01:33:58.729330 waagent[1644]: 2025-09-13T01:33:58.729234Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 01:33:58.737434 waagent[1644]: 2025-09-13T01:33:58.737367Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 13 01:33:58.739575 waagent[1644]: 2025-09-13T01:33:58.739514Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 01:33:58.741486 waagent[1644]: 2025-09-13T01:33:58.741430Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Sep 13 01:33:58.782628 waagent[1644]: 2025-09-13T01:33:58.782566Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
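The routing table dumps in this log are raw /proc/net/route contents, where each IPv4 address is little-endian hex: 0114C80A is the gateway 10.200.20.1, 0014C80A the on-link 10.200.20.0/24, 10813FA8 the WireServer 168.63.129.16, and FEA9FEA9 the IMDS address 169.254.169.254. A small decoder for the same file:

    import socket
    import struct

    def hex_to_ip(hex_addr):
        # /proc/net/route stores IPv4 addresses as little-endian hex words.
        return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the Iface/Destination/Gateway header row
        for line in f:
            iface, dest, gateway, *_ = line.split()
            print(iface, hex_to_ip(dest), "via", hex_to_ip(gateway))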
Sep 13 01:33:58.835429 waagent[1644]: 2025-09-13T01:33:58.835300Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1638' Sep 13 01:33:58.949439 waagent[1644]: 2025-09-13T01:33:58.949309Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 01:33:58.949439 waagent[1644]: Executing ['ip', '-a', '-o', 'link']: Sep 13 01:33:58.949439 waagent[1644]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 01:33:58.949439 waagent[1644]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c1:6d:d2 brd ff:ff:ff:ff:ff:ff Sep 13 01:33:58.949439 waagent[1644]: 3: enP44304s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c1:6d:d2 brd ff:ff:ff:ff:ff:ff\ altname enP44304p0s2 Sep 13 01:33:58.949439 waagent[1644]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 01:33:58.949439 waagent[1644]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 01:33:58.949439 waagent[1644]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 01:33:58.949439 waagent[1644]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 01:33:58.949439 waagent[1644]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 01:33:58.949439 waagent[1644]: 2: eth0 inet6 fe80::222:48ff:fec1:6dd2/64 scope link \ valid_lft forever preferred_lft forever Sep 13 01:33:59.200199 waagent[1644]: 2025-09-13T01:33:59.200130Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 13 01:34:00.010646 waagent[1581]: 2025-09-13T01:34:00.010529Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 13 01:34:00.015990 waagent[1581]: 2025-09-13T01:34:00.015936Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 13 01:34:01.298215 waagent[1672]: 2025-09-13T01:34:01.298124Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 13 01:34:01.299240 waagent[1672]: 2025-09-13T01:34:01.299184Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 13 01:34:01.299498 waagent[1672]: 2025-09-13T01:34:01.299450Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 13 01:34:01.299728 waagent[1672]: 2025-09-13T01:34:01.299683Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 13 01:34:01.312821 waagent[1672]: 2025-09-13T01:34:01.312728Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 01:34:01.313379 waagent[1672]: 2025-09-13T01:34:01.313324Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:34:01.313636 waagent[1672]: 2025-09-13T01:34:01.313589Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:34:01.313959 waagent[1672]: 2025-09-13T01:34:01.313907Z INFO ExtHandler ExtHandler Initializing the goal state...
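The EnvHandler error at the start of this stretch ("invalid literal for int() with base 10: 'MainPID=1638'") is an ordinary Python parsing slip in this agent version: it reads the DHCP client's PID from systemd, apparently via something like systemctl show, and feeds the whole key=value line to int() without splitting it. PID 1638 is systemd-networkd's, matching systemd-networkd[1638] earlier, since networkd is the DHCP client on this image. Roughly:

    import subprocess

    out = subprocess.check_output(
        ["systemctl", "show", "--property", "MainPID", "systemd-networkd"],
        text=True,
    ).strip()                        # -> "MainPID=1638"

    # int(out) raises: invalid literal for int() with base 10: 'MainPID=1638'
    pid = int(out.split("=", 1)[1])  # split off the property name first
    print(pid)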
Sep 13 01:34:01.327219 waagent[1672]: 2025-09-13T01:34:01.327156Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 01:34:01.339123 waagent[1672]: 2025-09-13T01:34:01.339068Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 13 01:34:01.340332 waagent[1672]: 2025-09-13T01:34:01.340276Z INFO ExtHandler Sep 13 01:34:01.340599 waagent[1672]: 2025-09-13T01:34:01.340548Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: af6caba3-0562-40f8-b615-228419c3ab60 eTag: 12480230721056194298 source: Fabric] Sep 13 01:34:01.341476 waagent[1672]: 2025-09-13T01:34:01.341420Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 13 01:34:01.342833 waagent[1672]: 2025-09-13T01:34:01.342776Z INFO ExtHandler Sep 13 01:34:01.343078 waagent[1672]: 2025-09-13T01:34:01.343030Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 13 01:34:01.350176 waagent[1672]: 2025-09-13T01:34:01.350130Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 13 01:34:01.350821 waagent[1672]: 2025-09-13T01:34:01.350774Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 01:34:01.370454 waagent[1672]: 2025-09-13T01:34:01.370392Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 13 01:34:01.432500 waagent[1672]: 2025-09-13T01:34:01.432385Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08', 'hasPrivateKey': True} Sep 13 01:34:01.434008 waagent[1672]: 2025-09-13T01:34:01.433947Z INFO ExtHandler Fetch goal state from WireServer completed Sep 13 01:34:01.435038 waagent[1672]: 2025-09-13T01:34:01.434979Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Sep 13 01:34:01.454531 waagent[1672]: 2025-09-13T01:34:01.454432Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 13 01:34:01.462094 waagent[1672]: 2025-09-13T01:34:01.462006Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 01:34:01.465605 waagent[1672]: 2025-09-13T01:34:01.465515Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 13 01:34:01.465934 waagent[1672]: 2025-09-13T01:34:01.465883Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 13 01:34:01.770511 waagent[1672]: 2025-09-13T01:34:01.770385Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Sep 13 01:34:01.770511 waagent[1672]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:34:01.770511 waagent[1672]: pkts bytes target prot opt in out source destination Sep 13 01:34:01.770511 waagent[1672]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:34:01.770511 waagent[1672]: pkts bytes target prot opt in out source destination Sep 13 01:34:01.770511 waagent[1672]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:34:01.770511 waagent[1672]: pkts bytes target prot opt in out source destination Sep 13 01:34:01.770511 waagent[1672]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 01:34:01.770511 waagent[1672]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 01:34:01.770511 waagent[1672]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 01:34:01.772035 waagent[1672]: 2025-09-13T01:34:01.771974Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 13 01:34:01.774964 waagent[1672]: 2025-09-13T01:34:01.774856Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 13 01:34:01.775381 waagent[1672]: 2025-09-13T01:34:01.775327Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 01:34:01.775916 waagent[1672]: 2025-09-13T01:34:01.775861Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 01:34:01.783989 waagent[1672]: 2025-09-13T01:34:01.783932Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Sep 13 01:34:01.784716 waagent[1672]: 2025-09-13T01:34:01.784661Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 01:34:01.793242 waagent[1672]: 2025-09-13T01:34:01.793175Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1672 Sep 13 01:34:01.796892 waagent[1672]: 2025-09-13T01:34:01.796817Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 01:34:01.797943 waagent[1672]: 2025-09-13T01:34:01.797886Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 13 01:34:01.798998 waagent[1672]: 2025-09-13T01:34:01.798943Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 13 01:34:01.801904 waagent[1672]: 2025-09-13T01:34:01.801847Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 13 01:34:01.802399 waagent[1672]: 2025-09-13T01:34:01.802343Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 13 01:34:01.803925 waagent[1672]: 2025-09-13T01:34:01.803857Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 01:34:01.804216 waagent[1672]: 2025-09-13T01:34:01.804151Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:34:01.804826 waagent[1672]: 2025-09-13T01:34:01.804761Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:34:01.805462 waagent[1672]: 2025-09-13T01:34:01.805396Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 13 01:34:01.805929 waagent[1672]: 2025-09-13T01:34:01.805863Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 01:34:01.806892 waagent[1672]: 2025-09-13T01:34:01.806836Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 01:34:01.806892 waagent[1672]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 01:34:01.806892 waagent[1672]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 01:34:01.806892 waagent[1672]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 01:34:01.806892 waagent[1672]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:34:01.806892 waagent[1672]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:34:01.806892 waagent[1672]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:34:01.806892 waagent[1672]: 2025-09-13T01:34:01.806706Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 01:34:01.807360 waagent[1672]: 2025-09-13T01:34:01.807302Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:34:01.807561 waagent[1672]: 2025-09-13T01:34:01.807500Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
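The "Created firewall rules for Azure Fabric" table further up boils down to three OUTPUT rules fencing off the WireServer: allow DNS (tcp/53), allow traffic owned by uid 0 (the agent itself runs as root), and drop any other new connection to 168.63.129.16. Roughly equivalent invocations, with the caveat that the exact table and rule order vary across agent versions:

    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        # allow DNS queries to the WireServer
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        # allow root-owned traffic (the agent runs as uid 0)
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # drop new connections from everything else
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w"] + rule, check=True)  # -w waits on the xtables lock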
Sep 13 01:34:01.810272 waagent[1672]: 2025-09-13T01:34:01.810135Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:34:01.811415 waagent[1672]: 2025-09-13T01:34:01.811355Z INFO EnvHandler ExtHandler Configure routes Sep 13 01:34:01.811678 waagent[1672]: 2025-09-13T01:34:01.811631Z INFO EnvHandler ExtHandler Gateway:None Sep 13 01:34:01.811893 waagent[1672]: 2025-09-13T01:34:01.811849Z INFO EnvHandler ExtHandler Routes:None Sep 13 01:34:01.815417 waagent[1672]: 2025-09-13T01:34:01.815336Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 13 01:34:01.816595 waagent[1672]: 2025-09-13T01:34:01.814699Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 01:34:01.817100 waagent[1672]: 2025-09-13T01:34:01.817030Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 01:34:01.834630 waagent[1672]: 2025-09-13T01:34:01.834545Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 01:34:01.834630 waagent[1672]: Executing ['ip', '-a', '-o', 'link']: Sep 13 01:34:01.834630 waagent[1672]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 01:34:01.834630 waagent[1672]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c1:6d:d2 brd ff:ff:ff:ff:ff:ff Sep 13 01:34:01.834630 waagent[1672]: 3: enP44304s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c1:6d:d2 brd ff:ff:ff:ff:ff:ff\ altname enP44304p0s2 Sep 13 01:34:01.834630 waagent[1672]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 01:34:01.834630 waagent[1672]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 01:34:01.834630 waagent[1672]: 2: eth0 inet 10.200.20.15/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 01:34:01.834630 waagent[1672]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 01:34:01.834630 waagent[1672]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 01:34:01.834630 waagent[1672]: 2: eth0 inet6 fe80::222:48ff:fec1:6dd2/64 scope link \ valid_lft forever preferred_lft forever Sep 13 01:34:01.844510 waagent[1672]: 2025-09-13T01:34:01.844421Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 13 01:34:01.860124 waagent[1672]: 2025-09-13T01:34:01.860032Z INFO ExtHandler ExtHandler Sep 13 01:34:01.861189 waagent[1672]: 2025-09-13T01:34:01.861126Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c4e64b7d-2205-42db-a108-b3e735f7099d correlation 3d36d5a0-c756-42a4-a874-b2fabb0cc724 created: 2025-09-13T01:32:05.978531Z] Sep 13 01:34:01.864541 waagent[1672]: 2025-09-13T01:34:01.864475Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 13 01:34:01.871048 waagent[1672]: 2025-09-13T01:34:01.870987Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] Sep 13 01:34:01.892591 waagent[1672]: 2025-09-13T01:34:01.892508Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 01:34:01.899171 waagent[1672]: 2025-09-13T01:34:01.898887Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Sep 13 01:34:01.904242 waagent[1672]: 2025-09-13T01:34:01.904150Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 467D09CF-35E3-49F4-B9D8-745A012936A0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 13 01:34:01.906589 waagent[1672]: 2025-09-13T01:34:01.906538Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 13 01:34:04.941563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 01:34:04.941739 systemd[1]: Stopped kubelet.service. Sep 13 01:34:04.943079 systemd[1]: Starting kubelet.service... Sep 13 01:34:05.264119 systemd[1]: Started kubelet.service. Sep 13 01:34:05.302504 kubelet[1717]: E0913 01:34:05.302449 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:05.304545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:05.304664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:34:15.441589 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 01:34:15.441769 systemd[1]: Stopped kubelet.service. Sep 13 01:34:15.443141 systemd[1]: Starting kubelet.service... Sep 13 01:34:15.764428 systemd[1]: Started kubelet.service. Sep 13 01:34:15.800547 kubelet[1726]: E0913 01:34:15.800494 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:15.802604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:15.802721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:34:16.913623 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 13 01:34:22.970638 systemd[1]: Created slice system-sshd.slice. Sep 13 01:34:22.973364 systemd[1]: Started sshd@0-10.200.20.15:22-10.200.16.10:57318.service. Sep 13 01:34:23.650603 sshd[1732]: Accepted publickey for core from 10.200.16.10 port 57318 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:23.669400 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:23.673727 systemd[1]: Started session-3.scope. Sep 13 01:34:23.674056 systemd-logind[1463]: New session 3 of user core. Sep 13 01:34:24.044888 systemd[1]: Started sshd@1-10.200.20.15:22-10.200.16.10:57326.service. Sep 13 01:34:24.465751 sshd[1737]: Accepted publickey for core from 10.200.16.10 port 57326 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:24.467338 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:24.471002 systemd-logind[1463]: New session 4 of user core. Sep 13 01:34:24.471471 systemd[1]: Started session-4.scope. Sep 13 01:34:24.800640 sshd[1737]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:24.803411 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. 
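The "Set block dev timeout: sda with timeout: 300" entry just above is the agent raising the SCSI command timeout on the OS disk, which helps ride out transient storage-fabric stalls on Azure; it amounts to a one-line sysfs write:

    # Equivalent of waagent's block-device timeout tweak for /dev/sda.
    with open("/sys/block/sda/device/timeout", "w") as f:
        f.write("300")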
Sep 13 01:34:24.803638 systemd[1]: sshd@1-10.200.20.15:22-10.200.16.10:57326.service: Deactivated successfully. Sep 13 01:34:24.804423 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 01:34:24.805084 systemd-logind[1463]: Removed session 4. Sep 13 01:34:24.872714 systemd[1]: Started sshd@2-10.200.20.15:22-10.200.16.10:57340.service. Sep 13 01:34:25.294534 sshd[1743]: Accepted publickey for core from 10.200.16.10 port 57340 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:25.296059 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:25.300160 systemd[1]: Started session-5.scope. Sep 13 01:34:25.300786 systemd-logind[1463]: New session 5 of user core. Sep 13 01:34:25.606923 sshd[1743]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:25.609305 systemd[1]: sshd@2-10.200.20.15:22-10.200.16.10:57340.service: Deactivated successfully. Sep 13 01:34:25.609960 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 01:34:25.610539 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Sep 13 01:34:25.611291 systemd-logind[1463]: Removed session 5. Sep 13 01:34:25.689987 systemd[1]: Started sshd@3-10.200.20.15:22-10.200.16.10:57344.service. Sep 13 01:34:25.941737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 01:34:25.941920 systemd[1]: Stopped kubelet.service. Sep 13 01:34:25.943280 systemd[1]: Starting kubelet.service... Sep 13 01:34:26.032891 systemd[1]: Started kubelet.service. Sep 13 01:34:26.068182 kubelet[1755]: E0913 01:34:26.068128 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:26.070145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:26.070299 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:34:26.115079 sshd[1749]: Accepted publickey for core from 10.200.16.10 port 57344 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:26.115987 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:26.120173 systemd[1]: Started session-6.scope. Sep 13 01:34:26.120533 systemd-logind[1463]: New session 6 of user core. Sep 13 01:34:26.450428 sshd[1749]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:26.452683 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 01:34:26.453207 systemd[1]: sshd@3-10.200.20.15:22-10.200.16.10:57344.service: Deactivated successfully. Sep 13 01:34:26.454338 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Sep 13 01:34:26.455049 systemd-logind[1463]: Removed session 6. Sep 13 01:34:26.520398 systemd[1]: Started sshd@4-10.200.20.15:22-10.200.16.10:57346.service. Sep 13 01:34:26.940227 sshd[1764]: Accepted publickey for core from 10.200.16.10 port 57346 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:34:26.941755 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:34:26.945898 systemd[1]: Started session-7.scope. Sep 13 01:34:26.947096 systemd-logind[1463]: New session 7 of user core. 
Sep 13 01:34:27.667552 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 01:34:27.667780 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 01:34:27.702952 systemd[1]: Starting docker.service... Sep 13 01:34:27.763568 env[1777]: time="2025-09-13T01:34:27.763525316Z" level=info msg="Starting up" Sep 13 01:34:27.772314 env[1777]: time="2025-09-13T01:34:27.772278423Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 01:34:27.772314 env[1777]: time="2025-09-13T01:34:27.772306023Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 01:34:27.772444 env[1777]: time="2025-09-13T01:34:27.772332144Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 01:34:27.772444 env[1777]: time="2025-09-13T01:34:27.772342784Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 01:34:27.773996 env[1777]: time="2025-09-13T01:34:27.773973411Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 01:34:27.774106 env[1777]: time="2025-09-13T01:34:27.774092773Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 01:34:27.774166 env[1777]: time="2025-09-13T01:34:27.774153174Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 01:34:27.774216 env[1777]: time="2025-09-13T01:34:27.774204175Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 01:34:27.863385 env[1777]: time="2025-09-13T01:34:27.863342348Z" level=info msg="Loading containers: start." Sep 13 01:34:28.116273 kernel: Initializing XFRM netlink socket Sep 13 01:34:28.157129 env[1777]: time="2025-09-13T01:34:28.157085024Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 01:34:28.372898 systemd-networkd[1638]: docker0: Link UP Sep 13 01:34:28.397483 env[1777]: time="2025-09-13T01:34:28.397453997Z" level=info msg="Loading containers: done." Sep 13 01:34:28.406628 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3327318572-merged.mount: Deactivated successfully. Sep 13 01:34:28.422584 env[1777]: time="2025-09-13T01:34:28.422546711Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 01:34:28.422748 env[1777]: time="2025-09-13T01:34:28.422727714Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 01:34:28.422846 env[1777]: time="2025-09-13T01:34:28.422828476Z" level=info msg="Daemon has completed initialization" Sep 13 01:34:28.460200 systemd[1]: Started docker.service. Sep 13 01:34:28.466418 env[1777]: time="2025-09-13T01:34:28.466365759Z" level=info msg="API listen on /run/docker.sock" Sep 13 01:34:28.578006 update_engine[1464]: I0913 01:34:28.577636 1464 update_attempter.cc:509] Updating boot flags... Sep 13 01:34:32.493814 env[1475]: time="2025-09-13T01:34:32.493547594Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 01:34:33.369265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977730292.mount: Deactivated successfully. 
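Docker's note above, that docker0 defaults to 172.17.0.0/16 and that --bip overrides it, has a persistent counterpart in /etc/docker/daemon.json. A sketch with a hypothetical replacement subnet, e.g. to avoid a collision with an existing 172.17.0.0/16 route:

    import json

    # "bip" sets docker0's address and implies the bridge subnet;
    # 172.26.0.1/24 is an arbitrary example value.
    with open("/etc/docker/daemon.json", "w") as f:
        json.dump({"bip": "172.26.0.1/24"}, f, indent=2)
    # Applied on the next daemon restart (systemctl restart docker).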
Sep 13 01:34:34.875036 env[1475]: time="2025-09-13T01:34:34.874972241Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:34.881625 env[1475]: time="2025-09-13T01:34:34.881578351Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:34.885959 env[1475]: time="2025-09-13T01:34:34.885921797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:34.890470 env[1475]: time="2025-09-13T01:34:34.890443606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:34.892229 env[1475]: time="2025-09-13T01:34:34.892199944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 13 01:34:34.893803 env[1475]: time="2025-09-13T01:34:34.893769041Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 01:34:36.191550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 13 01:34:36.191720 systemd[1]: Stopped kubelet.service. Sep 13 01:34:36.193083 systemd[1]: Starting kubelet.service... Sep 13 01:34:36.308367 systemd[1]: Started kubelet.service. Sep 13 01:34:36.415049 kubelet[1934]: E0913 01:34:36.414971 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:36.417121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:36.417260 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 01:34:36.773285 env[1475]: time="2025-09-13T01:34:36.772789848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:36.782620 env[1475]: time="2025-09-13T01:34:36.782549299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:36.788526 env[1475]: time="2025-09-13T01:34:36.788474555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:36.793381 env[1475]: time="2025-09-13T01:34:36.793344881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:36.794359 env[1475]: time="2025-09-13T01:34:36.794330690Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 13 01:34:36.795685 env[1475]: time="2025-09-13T01:34:36.795644302Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 01:34:37.979846 env[1475]: time="2025-09-13T01:34:37.979791223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:37.988850 env[1475]: time="2025-09-13T01:34:37.988814382Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:37.993661 env[1475]: time="2025-09-13T01:34:37.993628584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:37.998898 env[1475]: time="2025-09-13T01:34:37.998853230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:37.999238 env[1475]: time="2025-09-13T01:34:37.999208193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 13 01:34:38.001986 env[1475]: time="2025-09-13T01:34:38.001950537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 01:34:40.090655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806009261.mount: Deactivated successfully. 
Sep 13 01:34:40.582604 env[1475]: time="2025-09-13T01:34:40.582546051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:40.589531 env[1475]: time="2025-09-13T01:34:40.589470223Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:40.594104 env[1475]: time="2025-09-13T01:34:40.594065191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:40.600948 env[1475]: time="2025-09-13T01:34:40.600916323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:40.601303 env[1475]: time="2025-09-13T01:34:40.601268683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 13 01:34:40.602076 env[1475]: time="2025-09-13T01:34:40.602043765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 01:34:41.290092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64511845.mount: Deactivated successfully. Sep 13 01:34:42.601299 env[1475]: time="2025-09-13T01:34:42.601219568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:42.608372 env[1475]: time="2025-09-13T01:34:42.608316940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:42.612878 env[1475]: time="2025-09-13T01:34:42.612841147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:42.616981 env[1475]: time="2025-09-13T01:34:42.616946274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:42.617871 env[1475]: time="2025-09-13T01:34:42.617841355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 13 01:34:42.618411 env[1475]: time="2025-09-13T01:34:42.618384316Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 01:34:43.188391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349912125.mount: Deactivated successfully. 
Sep 13 01:34:43.221028 env[1475]: time="2025-09-13T01:34:43.220957655Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:43.228698 env[1475]: time="2025-09-13T01:34:43.228659547Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:43.233333 env[1475]: time="2025-09-13T01:34:43.233297475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:43.237528 env[1475]: time="2025-09-13T01:34:43.237491001Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:43.238163 env[1475]: time="2025-09-13T01:34:43.238133962Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 01:34:43.238701 env[1475]: time="2025-09-13T01:34:43.238677923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 01:34:44.241209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486401053.mount: Deactivated successfully. Sep 13 01:34:46.433816 env[1475]: time="2025-09-13T01:34:46.433772514Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:46.441646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 13 01:34:46.441824 systemd[1]: Stopped kubelet.service. Sep 13 01:34:46.443142 systemd[1]: Starting kubelet.service... Sep 13 01:34:46.448964 env[1475]: time="2025-09-13T01:34:46.448930656Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:46.457993 env[1475]: time="2025-09-13T01:34:46.457956030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:46.531755 systemd[1]: Started kubelet.service. Sep 13 01:34:46.566543 kubelet[1943]: E0913 01:34:46.566483 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:46.568632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:46.568752 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 01:34:46.756911 env[1475]: time="2025-09-13T01:34:46.756378708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:46.757063 env[1475]: time="2025-09-13T01:34:46.756925348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 13 01:34:53.055920 systemd[1]: Stopped kubelet.service. Sep 13 01:34:53.059364 systemd[1]: Starting kubelet.service... Sep 13 01:34:53.087568 systemd[1]: Reloading. Sep 13 01:34:53.155518 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2025-09-13T01:34:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:34:53.158328 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2025-09-13T01:34:53Z" level=info msg="torcx already run" Sep 13 01:34:53.257934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:34:53.257957 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:34:53.275769 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:34:53.370657 systemd[1]: Started kubelet.service. Sep 13 01:34:53.374689 systemd[1]: Stopping kubelet.service... Sep 13 01:34:53.375524 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:34:53.375724 systemd[1]: Stopped kubelet.service. Sep 13 01:34:53.377206 systemd[1]: Starting kubelet.service... Sep 13 01:34:53.587329 systemd[1]: Started kubelet.service. Sep 13 01:34:53.725329 kubelet[2062]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:34:53.725329 kubelet[2062]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 01:34:53.725329 kubelet[2062]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 01:34:53.725710 kubelet[2062]: I0913 01:34:53.725415 2062 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:34:54.482451 kubelet[2062]: I0913 01:34:54.482408 2062 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 01:34:54.482451 kubelet[2062]: I0913 01:34:54.482443 2062 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:34:54.483195 kubelet[2062]: I0913 01:34:54.483167 2062 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 01:34:54.813665 kubelet[2062]: E0913 01:34:54.813553 2062 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:54.814818 kubelet[2062]: I0913 01:34:54.814798 2062 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:34:54.820204 kubelet[2062]: E0913 01:34:54.820127 2062 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:34:54.820359 kubelet[2062]: I0913 01:34:54.820343 2062 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:34:54.824371 kubelet[2062]: I0913 01:34:54.824352 2062 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 01:34:54.825202 kubelet[2062]: I0913 01:34:54.825184 2062 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 01:34:54.825477 kubelet[2062]: I0913 01:34:54.825447 2062 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:34:54.825712 kubelet[2062]: I0913 01:34:54.825541 2062 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-49eff79a60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:34:54.825853 kubelet[2062]: I0913 01:34:54.825841 2062 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:34:54.825914 kubelet[2062]: I0913 01:34:54.825905 2062 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 01:34:54.826074 kubelet[2062]: I0913 01:34:54.826063 2062 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:34:54.838590 kubelet[2062]: W0913 01:34:54.838533 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-49eff79a60&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Sep 13 01:34:54.838781 kubelet[2062]: E0913 01:34:54.838760 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-49eff79a60&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:54.839051 kubelet[2062]: I0913 01:34:54.839022 2062 kubelet.go:408] "Attempting to sync node with API server" Sep 13 01:34:54.839091 kubelet[2062]: I0913 01:34:54.839059 2062 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:34:54.839091 kubelet[2062]: I0913 01:34:54.839089 2062 kubelet.go:314] "Adding apiserver pod source" 
Sep 13 01:34:54.839142 kubelet[2062]: I0913 01:34:54.839105 2062 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:34:54.844694 kubelet[2062]: W0913 01:34:54.844494 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Sep 13 01:34:54.844694 kubelet[2062]: E0913 01:34:54.844596 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:54.844829 kubelet[2062]: I0913 01:34:54.844809 2062 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:34:54.845338 kubelet[2062]: I0913 01:34:54.845312 2062 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:34:54.845415 kubelet[2062]: W0913 01:34:54.845381 2062 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 01:34:54.854858 kubelet[2062]: I0913 01:34:54.854827 2062 server.go:1274] "Started kubelet" Sep 13 01:34:54.855731 kubelet[2062]: I0913 01:34:54.855694 2062 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:34:54.858821 kubelet[2062]: I0913 01:34:54.858748 2062 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:34:54.859139 kubelet[2062]: I0913 01:34:54.859105 2062 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:34:54.860127 kubelet[2062]: I0913 01:34:54.860108 2062 server.go:449] "Adding debug handlers to kubelet server" Sep 13 01:34:54.865558 kubelet[2062]: E0913 01:34:54.864334 2062 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-49eff79a60.1864b3a96dd42605 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-49eff79a60,UID:ci-3510.3.8-n-49eff79a60,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-49eff79a60,},FirstTimestamp:2025-09-13 01:34:54.854800901 +0000 UTC m=+1.262177506,LastTimestamp:2025-09-13 01:34:54.854800901 +0000 UTC m=+1.262177506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-49eff79a60,}" Sep 13 01:34:54.873815 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 01:34:54.874899 kubelet[2062]: I0913 01:34:54.874879 2062 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:34:54.876022 kubelet[2062]: I0913 01:34:54.876001 2062 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:34:54.878329 kubelet[2062]: E0913 01:34:54.878310 2062 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:34:54.878813 kubelet[2062]: E0913 01:34:54.878797 2062 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-49eff79a60\" not found" Sep 13 01:34:54.879013 kubelet[2062]: I0913 01:34:54.879002 2062 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 01:34:54.879341 kubelet[2062]: I0913 01:34:54.879322 2062 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 01:34:54.879610 kubelet[2062]: I0913 01:34:54.879598 2062 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:34:54.880712 kubelet[2062]: E0913 01:34:54.880686 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-49eff79a60?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="200ms" Sep 13 01:34:54.880900 kubelet[2062]: W0913 01:34:54.880864 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Sep 13 01:34:54.881000 kubelet[2062]: E0913 01:34:54.880982 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:54.881226 kubelet[2062]: I0913 01:34:54.881210 2062 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:34:54.881563 kubelet[2062]: I0913 01:34:54.881543 2062 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:34:54.883614 kubelet[2062]: I0913 01:34:54.883595 2062 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:34:54.940073 kubelet[2062]: I0913 01:34:54.940038 2062 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 01:34:54.940073 kubelet[2062]: I0913 01:34:54.940059 2062 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 01:34:54.940073 kubelet[2062]: I0913 01:34:54.940080 2062 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:34:54.979574 kubelet[2062]: E0913 01:34:54.979548 2062 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-49eff79a60\" not found" Sep 13 01:34:55.080018 kubelet[2062]: E0913 01:34:55.079937 2062 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-49eff79a60\" not found" Sep 13 01:34:55.081475 kubelet[2062]: E0913 01:34:55.081443 2062 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-49eff79a60?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="400ms" Sep 13 01:34:55.180777 kubelet[2062]: E0913 01:34:55.180746 2062 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-49eff79a60\" not found" Sep 13 01:34:55.269502 kubelet[2062]: I0913 01:34:55.269471 2062 policy_none.go:49] "None policy: Start" Sep 13 01:34:55.270165 kubelet[2062]: I0913 01:34:55.270145 2062 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 01:34:55.270240 kubelet[2062]: I0913 01:34:55.270173 2062 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:34:55.281827 kubelet[2062]: E0913 01:34:55.281799 2062 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-49eff79a60\" not found" Sep 13 01:34:55.299022 kubelet[2062]: I0913 01:34:55.298880 2062 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:34:55.303944 kubelet[2062]: I0913 01:34:55.300043 2062 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 01:34:55.303944 kubelet[2062]: I0913 01:34:55.300069 2062 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 01:34:55.303944 kubelet[2062]: I0913 01:34:55.300087 2062 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 01:34:55.303944 kubelet[2062]: E0913 01:34:55.300129 2062 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:34:55.303944 kubelet[2062]: W0913 01:34:55.301283 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Sep 13 01:34:55.303944 kubelet[2062]: E0913 01:34:55.301315 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:55.312560 systemd[1]: Created slice kubepods.slice. Sep 13 01:34:55.316825 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 01:34:55.319478 systemd[1]: Created slice kubepods-besteffort.slice. 
Sep 13 01:34:55.331142 kubelet[2062]: I0913 01:34:55.331069 2062 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:34:55.331632 kubelet[2062]: I0913 01:34:55.331610 2062 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:34:55.331711 kubelet[2062]: I0913 01:34:55.331630 2062 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:34:55.332811 kubelet[2062]: I0913 01:34:55.332437 2062 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:34:55.334069 kubelet[2062]: E0913 01:34:55.334042 2062 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-49eff79a60\" not found" Sep 13 01:34:55.409152 systemd[1]: Created slice kubepods-burstable-podad4dab777cc595c3599f57cd791303f0.slice. Sep 13 01:34:55.423182 systemd[1]: Created slice kubepods-burstable-pod16532c94c5a9b6d11794c6a3a3dbc839.slice. Sep 13 01:34:55.433445 kubelet[2062]: I0913 01:34:55.433042 2062 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.433306 systemd[1]: Created slice kubepods-burstable-podcdd7292d9387f24ec8462afe5aba1676.slice. Sep 13 01:34:55.433666 kubelet[2062]: E0913 01:34:55.433474 2062 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.482262 kubelet[2062]: E0913 01:34:55.482214 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-49eff79a60?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="800ms" Sep 13 01:34:55.483277 kubelet[2062]: I0913 01:34:55.483241 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.483314 kubelet[2062]: I0913 01:34:55.483285 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.483314 kubelet[2062]: I0913 01:34:55.483303 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16532c94c5a9b6d11794c6a3a3dbc839-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-49eff79a60\" (UID: \"16532c94c5a9b6d11794c6a3a3dbc839\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.483382 kubelet[2062]: I0913 01:34:55.483319 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdd7292d9387f24ec8462afe5aba1676-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-49eff79a60\" (UID: 
\"cdd7292d9387f24ec8462afe5aba1676\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.483382 kubelet[2062]: I0913 01:34:55.483334 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdd7292d9387f24ec8462afe5aba1676-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-49eff79a60\" (UID: \"cdd7292d9387f24ec8462afe5aba1676\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.483382 kubelet[2062]: I0913 01:34:55.483348 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.483382 kubelet[2062]: I0913 01:34:55.483362 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.483382 kubelet[2062]: I0913 01:34:55.483379 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.483493 kubelet[2062]: I0913 01:34:55.483393 2062 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdd7292d9387f24ec8462afe5aba1676-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-49eff79a60\" (UID: \"cdd7292d9387f24ec8462afe5aba1676\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.635614 kubelet[2062]: I0913 01:34:55.635523 2062 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.636483 kubelet[2062]: E0913 01:34:55.636457 2062 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:55.722548 env[1475]: time="2025-09-13T01:34:55.722503064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-49eff79a60,Uid:ad4dab777cc595c3599f57cd791303f0,Namespace:kube-system,Attempt:0,}" Sep 13 01:34:55.732266 env[1475]: time="2025-09-13T01:34:55.732201835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-49eff79a60,Uid:16532c94c5a9b6d11794c6a3a3dbc839,Namespace:kube-system,Attempt:0,}" Sep 13 01:34:55.737169 env[1475]: time="2025-09-13T01:34:55.736970520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-49eff79a60,Uid:cdd7292d9387f24ec8462afe5aba1676,Namespace:kube-system,Attempt:0,}" Sep 13 01:34:56.038745 kubelet[2062]: I0913 01:34:56.038465 2062 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:56.039088 kubelet[2062]: E0913 01:34:56.038774 2062 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:56.059425 kubelet[2062]: W0913 01:34:56.059369 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-49eff79a60&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Sep 13 01:34:56.059510 kubelet[2062]: E0913 01:34:56.059438 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-49eff79a60&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:56.283124 kubelet[2062]: E0913 01:34:56.283075 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-49eff79a60?timeout=10s\": dial tcp 10.200.20.15:6443: connect: connection refused" interval="1.6s" Sep 13 01:34:56.302069 kubelet[2062]: W0913 01:34:56.301721 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Sep 13 01:34:56.302069 kubelet[2062]: E0913 01:34:56.301799 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:56.392260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929618349.mount: Deactivated successfully. 
Sep 13 01:34:56.408092 kubelet[2062]: W0913 01:34:56.408056 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Sep 13 01:34:56.408234 kubelet[2062]: E0913 01:34:56.408101 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:56.427554 env[1475]: time="2025-09-13T01:34:56.427508542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.442641 env[1475]: time="2025-09-13T01:34:56.442601199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.447135 env[1475]: time="2025-09-13T01:34:56.447098524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.456796 env[1475]: time="2025-09-13T01:34:56.456755335Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.469965 env[1475]: time="2025-09-13T01:34:56.469918350Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.476605 env[1475]: time="2025-09-13T01:34:56.476564997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.482374 env[1475]: time="2025-09-13T01:34:56.482344804Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.491210 env[1475]: time="2025-09-13T01:34:56.491163533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.496820 env[1475]: time="2025-09-13T01:34:56.496790620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.507605 env[1475]: time="2025-09-13T01:34:56.507568072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.523117 env[1475]: time="2025-09-13T01:34:56.523072889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.529273 
env[1475]: time="2025-09-13T01:34:56.529221376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:56.589835 env[1475]: time="2025-09-13T01:34:56.585652959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:56.589835 env[1475]: time="2025-09-13T01:34:56.585691279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:56.589835 env[1475]: time="2025-09-13T01:34:56.585701239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:56.589835 env[1475]: time="2025-09-13T01:34:56.585877200Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c1fb09fc2a58db66b824c63e45d46bd5aadda3b703f8c121236c750e03b7d7c pid=2101 runtime=io.containerd.runc.v2 Sep 13 01:34:56.604773 systemd[1]: Started cri-containerd-7c1fb09fc2a58db66b824c63e45d46bd5aadda3b703f8c121236c750e03b7d7c.scope. Sep 13 01:34:56.630543 env[1475]: time="2025-09-13T01:34:56.630263129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:56.630839 env[1475]: time="2025-09-13T01:34:56.630810890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:56.630939 env[1475]: time="2025-09-13T01:34:56.630919210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:56.631167 env[1475]: time="2025-09-13T01:34:56.631137690Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb18f61169c6dc25916f6be6ad1e56a49f86aa13490d2f084878bbb223fccdba pid=2136 runtime=io.containerd.runc.v2 Sep 13 01:34:56.636365 env[1475]: time="2025-09-13T01:34:56.636192216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:56.636463 env[1475]: time="2025-09-13T01:34:56.636385696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:56.636463 env[1475]: time="2025-09-13T01:34:56.636413576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:56.639560 env[1475]: time="2025-09-13T01:34:56.637515138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7e835e6b6139a657451f2e4b46f48acfc6bfa76cac5020eada2e42cc8e8fca6 pid=2153 runtime=io.containerd.runc.v2 Sep 13 01:34:56.648742 systemd[1]: Started cri-containerd-c7e835e6b6139a657451f2e4b46f48acfc6bfa76cac5020eada2e42cc8e8fca6.scope. Sep 13 01:34:56.661446 systemd[1]: Started cri-containerd-bb18f61169c6dc25916f6be6ad1e56a49f86aa13490d2f084878bbb223fccdba.scope. 
Sep 13 01:34:56.669308 env[1475]: time="2025-09-13T01:34:56.669230253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-49eff79a60,Uid:ad4dab777cc595c3599f57cd791303f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c1fb09fc2a58db66b824c63e45d46bd5aadda3b703f8c121236c750e03b7d7c\"" Sep 13 01:34:56.675749 env[1475]: time="2025-09-13T01:34:56.674232139Z" level=info msg="CreateContainer within sandbox \"7c1fb09fc2a58db66b824c63e45d46bd5aadda3b703f8c121236c750e03b7d7c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 01:34:56.703486 env[1475]: time="2025-09-13T01:34:56.703428611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-49eff79a60,Uid:16532c94c5a9b6d11794c6a3a3dbc839,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb18f61169c6dc25916f6be6ad1e56a49f86aa13490d2f084878bbb223fccdba\"" Sep 13 01:34:56.705841 env[1475]: time="2025-09-13T01:34:56.704634613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-49eff79a60,Uid:cdd7292d9387f24ec8462afe5aba1676,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e835e6b6139a657451f2e4b46f48acfc6bfa76cac5020eada2e42cc8e8fca6\"" Sep 13 01:34:56.707287 env[1475]: time="2025-09-13T01:34:56.707237416Z" level=info msg="CreateContainer within sandbox \"bb18f61169c6dc25916f6be6ad1e56a49f86aa13490d2f084878bbb223fccdba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 01:34:56.708334 env[1475]: time="2025-09-13T01:34:56.708292497Z" level=info msg="CreateContainer within sandbox \"c7e835e6b6139a657451f2e4b46f48acfc6bfa76cac5020eada2e42cc8e8fca6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 01:34:56.721874 kubelet[2062]: W0913 01:34:56.721815 2062 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.15:6443: connect: connection refused Sep 13 01:34:56.722025 kubelet[2062]: E0913 01:34:56.721880 2062 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:56.758221 env[1475]: time="2025-09-13T01:34:56.758049953Z" level=info msg="CreateContainer within sandbox \"7c1fb09fc2a58db66b824c63e45d46bd5aadda3b703f8c121236c750e03b7d7c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"848dab0b0e2846e047ec9b33988250ed3c23518136a41c26e22623fdb8124381\"" Sep 13 01:34:56.760227 env[1475]: time="2025-09-13T01:34:56.760196515Z" level=info msg="StartContainer for \"848dab0b0e2846e047ec9b33988250ed3c23518136a41c26e22623fdb8124381\"" Sep 13 01:34:56.772361 env[1475]: time="2025-09-13T01:34:56.772317529Z" level=info msg="CreateContainer within sandbox \"c7e835e6b6139a657451f2e4b46f48acfc6bfa76cac5020eada2e42cc8e8fca6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"977eb111082728a273712473d81c8e77886198ff956687a7300ed4ccd26ca55a\"" Sep 13 01:34:56.773034 env[1475]: time="2025-09-13T01:34:56.773007209Z" level=info msg="StartContainer for \"977eb111082728a273712473d81c8e77886198ff956687a7300ed4ccd26ca55a\"" Sep 13 01:34:56.778026 systemd[1]: Started 
cri-containerd-848dab0b0e2846e047ec9b33988250ed3c23518136a41c26e22623fdb8124381.scope. Sep 13 01:34:56.788746 env[1475]: time="2025-09-13T01:34:56.788655627Z" level=info msg="CreateContainer within sandbox \"bb18f61169c6dc25916f6be6ad1e56a49f86aa13490d2f084878bbb223fccdba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f89d805b69bad029ffd23e32b57964c4afdb4ba06d20f8432477a691113de846\"" Sep 13 01:34:56.789194 env[1475]: time="2025-09-13T01:34:56.789167787Z" level=info msg="StartContainer for \"f89d805b69bad029ffd23e32b57964c4afdb4ba06d20f8432477a691113de846\"" Sep 13 01:34:56.815055 systemd[1]: Started cri-containerd-977eb111082728a273712473d81c8e77886198ff956687a7300ed4ccd26ca55a.scope. Sep 13 01:34:56.822581 systemd[1]: Started cri-containerd-f89d805b69bad029ffd23e32b57964c4afdb4ba06d20f8432477a691113de846.scope. Sep 13 01:34:56.842059 kubelet[2062]: I0913 01:34:56.841020 2062 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:56.842059 kubelet[2062]: E0913 01:34:56.841325 2062 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.15:6443/api/v1/nodes\": dial tcp 10.200.20.15:6443: connect: connection refused" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:56.842384 env[1475]: time="2025-09-13T01:34:56.842349367Z" level=info msg="StartContainer for \"848dab0b0e2846e047ec9b33988250ed3c23518136a41c26e22623fdb8124381\" returns successfully" Sep 13 01:34:56.844271 kubelet[2062]: E0913 01:34:56.843717 2062 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.15:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:56.867890 env[1475]: time="2025-09-13T01:34:56.867841236Z" level=info msg="StartContainer for \"977eb111082728a273712473d81c8e77886198ff956687a7300ed4ccd26ca55a\" returns successfully" Sep 13 01:34:56.913617 env[1475]: time="2025-09-13T01:34:56.913564367Z" level=info msg="StartContainer for \"f89d805b69bad029ffd23e32b57964c4afdb4ba06d20f8432477a691113de846\" returns successfully" Sep 13 01:34:57.386664 systemd[1]: run-containerd-runc-k8s.io-7c1fb09fc2a58db66b824c63e45d46bd5aadda3b703f8c121236c750e03b7d7c-runc.CmIBGR.mount: Deactivated successfully. 
Sep 13 01:34:58.443265 kubelet[2062]: I0913 01:34:58.443229 2062 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:59.418389 kubelet[2062]: E0913 01:34:59.418353 2062 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-49eff79a60\" not found" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:59.545296 kubelet[2062]: I0913 01:34:59.545257 2062 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:34:59.545296 kubelet[2062]: E0913 01:34:59.545298 2062 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-49eff79a60\": node \"ci-3510.3.8-n-49eff79a60\" not found" Sep 13 01:34:59.847560 kubelet[2062]: I0913 01:34:59.847535 2062 apiserver.go:52] "Watching apiserver" Sep 13 01:34:59.880293 kubelet[2062]: I0913 01:34:59.880259 2062 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 01:35:01.532116 kubelet[2062]: W0913 01:35:01.532089 2062 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:01.645685 systemd[1]: Reloading. Sep 13 01:35:01.742916 /usr/lib/systemd/system-generators/torcx-generator[2357]: time="2025-09-13T01:35:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:35:01.742942 /usr/lib/systemd/system-generators/torcx-generator[2357]: time="2025-09-13T01:35:01Z" level=info msg="torcx already run" Sep 13 01:35:01.828640 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:35:01.828660 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:35:01.846582 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:35:01.963120 systemd[1]: Stopping kubelet.service... Sep 13 01:35:01.979814 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:35:01.979992 systemd[1]: Stopped kubelet.service. Sep 13 01:35:01.980039 systemd[1]: kubelet.service: Consumed 1.181s CPU time. Sep 13 01:35:01.981681 systemd[1]: Starting kubelet.service... Sep 13 01:35:02.160537 systemd[1]: Started kubelet.service. Sep 13 01:35:02.219994 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:35:02.219994 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 01:35:02.219994 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:35:02.220381 kubelet[2420]: I0913 01:35:02.220036 2420 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:35:02.230701 kubelet[2420]: I0913 01:35:02.230665 2420 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 01:35:02.230701 kubelet[2420]: I0913 01:35:02.230692 2420 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:35:02.230905 kubelet[2420]: I0913 01:35:02.230886 2420 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 01:35:02.233340 kubelet[2420]: I0913 01:35:02.233314 2420 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 01:35:02.235127 kubelet[2420]: I0913 01:35:02.235104 2420 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:35:02.246918 kubelet[2420]: E0913 01:35:02.246840 2420 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:35:02.247109 kubelet[2420]: I0913 01:35:02.247094 2420 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:35:02.250346 kubelet[2420]: I0913 01:35:02.250327 2420 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 01:35:02.250560 kubelet[2420]: I0913 01:35:02.250548 2420 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 01:35:02.250853 kubelet[2420]: I0913 01:35:02.250827 2420 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:35:02.251163 kubelet[2420]: I0913 01:35:02.250974 2420 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.8-n-49eff79a60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:35:02.251353 kubelet[2420]: I0913 01:35:02.251327 2420 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:35:02.251433 kubelet[2420]: I0913 01:35:02.251410 2420 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 01:35:02.251542 kubelet[2420]: I0913 01:35:02.251532 2420 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:35:02.251729 kubelet[2420]: I0913 01:35:02.251720 2420 kubelet.go:408] "Attempting to sync node with API server" Sep 13 01:35:02.252261 kubelet[2420]: I0913 01:35:02.252238 2420 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:35:02.252389 kubelet[2420]: I0913 01:35:02.252377 2420 kubelet.go:314] "Adding apiserver pod source" Sep 13 01:35:02.252465 kubelet[2420]: I0913 01:35:02.252449 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:35:02.258158 kubelet[2420]: I0913 01:35:02.258137 2420 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:35:02.258734 kubelet[2420]: I0913 01:35:02.258718 2420 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:35:02.259277 kubelet[2420]: I0913 01:35:02.259150 2420 server.go:1274] "Started kubelet" Sep 13 01:35:02.261122 kubelet[2420]: I0913 01:35:02.261104 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:35:02.269883 kubelet[2420]: I0913 01:35:02.269848 2420 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:35:02.272258 kubelet[2420]: I0913 01:35:02.272230 2420 server.go:449] "Adding debug handlers to kubelet server" Sep 13 01:35:02.274954 kubelet[2420]: I0913 01:35:02.274912 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:35:02.275285 kubelet[2420]: I0913 01:35:02.275272 2420 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:35:02.275701 kubelet[2420]: I0913 01:35:02.275686 2420 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:35:02.276881 kubelet[2420]: I0913 01:35:02.276868 2420 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 01:35:02.282997 kubelet[2420]: I0913 01:35:02.279902 2420 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:35:02.282997 kubelet[2420]: I0913 01:35:02.279990 2420 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:35:02.282997 kubelet[2420]: I0913 01:35:02.282096 2420 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:35:02.285416 kubelet[2420]: I0913 01:35:02.285401 2420 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 01:35:02.285602 kubelet[2420]: I0913 01:35:02.285592 2420 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:35:02.286884 kubelet[2420]: I0913 01:35:02.286862 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:35:02.289228 kubelet[2420]: I0913 01:35:02.289206 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 01:35:02.289367 kubelet[2420]: I0913 01:35:02.289355 2420 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 01:35:02.289447 kubelet[2420]: I0913 01:35:02.289438 2420 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 01:35:02.289552 kubelet[2420]: E0913 01:35:02.289535 2420 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:35:02.316392 kubelet[2420]: E0913 01:35:02.316366 2420 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:35:02.347623 kubelet[2420]: I0913 01:35:02.347600 2420 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 01:35:02.347785 kubelet[2420]: I0913 01:35:02.347772 2420 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 01:35:02.347847 kubelet[2420]: I0913 01:35:02.347838 2420 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:35:02.348046 kubelet[2420]: I0913 01:35:02.348033 2420 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 01:35:02.348126 kubelet[2420]: I0913 01:35:02.348103 2420 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 01:35:02.348180 kubelet[2420]: I0913 01:35:02.348171 2420 policy_none.go:49] "None policy: Start" Sep 13 01:35:02.348750 kubelet[2420]: I0913 01:35:02.348737 2420 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 01:35:02.348848 kubelet[2420]: I0913 01:35:02.348839 2420 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:35:02.349029 kubelet[2420]: I0913 01:35:02.349018 2420 state_mem.go:75] "Updated machine memory state" Sep 13 01:35:02.353739 kubelet[2420]: I0913 01:35:02.353430 2420 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:35:02.353859 kubelet[2420]: I0913 01:35:02.353845 2420 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:35:02.353890 kubelet[2420]: I0913 01:35:02.353858 2420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:35:02.354512 kubelet[2420]: I0913 01:35:02.354211 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:35:02.403657 kubelet[2420]: W0913 01:35:02.402781 2420 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:02.403657 kubelet[2420]: E0913 01:35:02.402850 2420 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.8-n-49eff79a60\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.403657 kubelet[2420]: W0913 01:35:02.403232 2420 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:02.403923 kubelet[2420]: W0913 01:35:02.403887 2420 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:02.461616 kubelet[2420]: I0913 01:35:02.461587 2420 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.480233 kubelet[2420]: I0913 01:35:02.480190 2420 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.480382 kubelet[2420]: I0913 01:35:02.480297 2420 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.586807 kubelet[2420]: I0913 01:35:02.586761 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.586959 kubelet[2420]: I0913 01:35:02.586855 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.586959 kubelet[2420]: I0913 01:35:02.586877 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.586959 kubelet[2420]: I0913 01:35:02.586896 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16532c94c5a9b6d11794c6a3a3dbc839-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-49eff79a60\" (UID: \"16532c94c5a9b6d11794c6a3a3dbc839\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.586959 kubelet[2420]: I0913 01:35:02.586946 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdd7292d9387f24ec8462afe5aba1676-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-49eff79a60\" (UID: \"cdd7292d9387f24ec8462afe5aba1676\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.587067 kubelet[2420]: I0913 01:35:02.586961 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdd7292d9387f24ec8462afe5aba1676-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-49eff79a60\" (UID: \"cdd7292d9387f24ec8462afe5aba1676\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.587067 kubelet[2420]: I0913 01:35:02.586976 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.587067 kubelet[2420]: I0913 01:35:02.586992 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad4dab777cc595c3599f57cd791303f0-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-49eff79a60\" (UID: \"ad4dab777cc595c3599f57cd791303f0\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.587067 kubelet[2420]: I0913 01:35:02.587035 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdd7292d9387f24ec8462afe5aba1676-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-49eff79a60\" (UID: \"cdd7292d9387f24ec8462afe5aba1676\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:02.709173 sudo[2451]: root : 
PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 01:35:02.709446 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 01:35:03.187856 sudo[2451]: pam_unix(sudo:session): session closed for user root Sep 13 01:35:03.253097 kubelet[2420]: I0913 01:35:03.253046 2420 apiserver.go:52] "Watching apiserver" Sep 13 01:35:03.285964 kubelet[2420]: I0913 01:35:03.285922 2420 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 01:35:03.348047 kubelet[2420]: W0913 01:35:03.348006 2420 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:35:03.348199 kubelet[2420]: E0913 01:35:03.348076 2420 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-49eff79a60\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-49eff79a60" Sep 13 01:35:03.364801 kubelet[2420]: I0913 01:35:03.364747 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-49eff79a60" podStartSLOduration=2.364729152 podStartE2EDuration="2.364729152s" podCreationTimestamp="2025-09-13 01:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:03.353382701 +0000 UTC m=+1.187917293" watchObservedRunningTime="2025-09-13 01:35:03.364729152 +0000 UTC m=+1.199263744" Sep 13 01:35:03.376022 kubelet[2420]: I0913 01:35:03.375971 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-49eff79a60" podStartSLOduration=1.375954122 podStartE2EDuration="1.375954122s" podCreationTimestamp="2025-09-13 01:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:03.365220992 +0000 UTC m=+1.199755544" watchObservedRunningTime="2025-09-13 01:35:03.375954122 +0000 UTC m=+1.210488714" Sep 13 01:35:03.388791 kubelet[2420]: I0913 01:35:03.388744 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-49eff79a60" podStartSLOduration=1.388730094 podStartE2EDuration="1.388730094s" podCreationTimestamp="2025-09-13 01:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:03.376438283 +0000 UTC m=+1.210972875" watchObservedRunningTime="2025-09-13 01:35:03.388730094 +0000 UTC m=+1.223264686" Sep 13 01:35:04.933075 sudo[1767]: pam_unix(sudo:session): session closed for user root Sep 13 01:35:05.031726 sshd[1764]: pam_unix(sshd:session): session closed for user core Sep 13 01:35:05.034098 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 01:35:05.034275 systemd[1]: session-7.scope: Consumed 7.581s CPU time. Sep 13 01:35:05.034685 systemd[1]: sshd@4-10.200.20.15:22-10.200.16.10:57346.service: Deactivated successfully. Sep 13 01:35:05.036277 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Sep 13 01:35:05.037183 systemd-logind[1463]: Removed session 7. 
Sep 13 01:35:06.291346 kubelet[2420]: I0913 01:35:06.291316 2420 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 01:35:06.292033 env[1475]: time="2025-09-13T01:35:06.291999363Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 01:35:06.292632 kubelet[2420]: I0913 01:35:06.292606 2420 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 01:35:07.208730 systemd[1]: Created slice kubepods-besteffort-pod8d4d5e85_ba89_4fb5_a475_d7473628fbb7.slice. Sep 13 01:35:07.219771 systemd[1]: Created slice kubepods-burstable-pod44e9af50_5481_40e7_b2ea_2d436e49614b.slice. Sep 13 01:35:07.309657 kubelet[2420]: I0913 01:35:07.309622 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-bpf-maps\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.310100 kubelet[2420]: I0913 01:35:07.310080 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-host-proc-sys-kernel\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.310217 kubelet[2420]: I0913 01:35:07.310202 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d4d5e85-ba89-4fb5-a475-d7473628fbb7-lib-modules\") pod \"kube-proxy-84gmr\" (UID: \"8d4d5e85-ba89-4fb5-a475-d7473628fbb7\") " pod="kube-system/kube-proxy-84gmr" Sep 13 01:35:07.310332 kubelet[2420]: I0913 01:35:07.310314 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4jq\" (UniqueName: \"kubernetes.io/projected/8d4d5e85-ba89-4fb5-a475-d7473628fbb7-kube-api-access-mb4jq\") pod \"kube-proxy-84gmr\" (UID: \"8d4d5e85-ba89-4fb5-a475-d7473628fbb7\") " pod="kube-system/kube-proxy-84gmr" Sep 13 01:35:07.310424 kubelet[2420]: I0913 01:35:07.310411 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cni-path\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.310507 kubelet[2420]: I0913 01:35:07.310495 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-lib-modules\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.310594 kubelet[2420]: I0913 01:35:07.310581 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-xtables-lock\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.310698 kubelet[2420]: I0913 01:35:07.310680 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/8d4d5e85-ba89-4fb5-a475-d7473628fbb7-kube-proxy\") pod \"kube-proxy-84gmr\" (UID: \"8d4d5e85-ba89-4fb5-a475-d7473628fbb7\") " pod="kube-system/kube-proxy-84gmr" Sep 13 01:35:07.310795 kubelet[2420]: I0913 01:35:07.310774 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-hostproc\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.310884 kubelet[2420]: I0913 01:35:07.310871 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44e9af50-5481-40e7-b2ea-2d436e49614b-hubble-tls\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.310973 kubelet[2420]: I0913 01:35:07.310960 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-run\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.311056 kubelet[2420]: I0913 01:35:07.311044 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d4d5e85-ba89-4fb5-a475-d7473628fbb7-xtables-lock\") pod \"kube-proxy-84gmr\" (UID: \"8d4d5e85-ba89-4fb5-a475-d7473628fbb7\") " pod="kube-system/kube-proxy-84gmr" Sep 13 01:35:07.311141 kubelet[2420]: I0913 01:35:07.311129 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-cgroup\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.311223 kubelet[2420]: I0913 01:35:07.311211 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-etc-cni-netd\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.311329 kubelet[2420]: I0913 01:35:07.311315 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44e9af50-5481-40e7-b2ea-2d436e49614b-clustermesh-secrets\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.311413 kubelet[2420]: I0913 01:35:07.311400 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-config-path\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.311483 kubelet[2420]: I0913 01:35:07.311472 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-host-proc-sys-net\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" 
Sep 13 01:35:07.311555 kubelet[2420]: I0913 01:35:07.311543 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h22sg\" (UniqueName: \"kubernetes.io/projected/44e9af50-5481-40e7-b2ea-2d436e49614b-kube-api-access-h22sg\") pod \"cilium-q4l7k\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " pod="kube-system/cilium-q4l7k" Sep 13 01:35:07.407336 systemd[1]: Created slice kubepods-besteffort-pod7dc7d788_7440_48cf_94c8_a67c51714ad2.slice. Sep 13 01:35:07.411943 kubelet[2420]: I0913 01:35:07.411916 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxkh\" (UniqueName: \"kubernetes.io/projected/7dc7d788-7440-48cf-94c8-a67c51714ad2-kube-api-access-rwxkh\") pod \"cilium-operator-5d85765b45-s6gp5\" (UID: \"7dc7d788-7440-48cf-94c8-a67c51714ad2\") " pod="kube-system/cilium-operator-5d85765b45-s6gp5" Sep 13 01:35:07.412103 kubelet[2420]: I0913 01:35:07.412086 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7dc7d788-7440-48cf-94c8-a67c51714ad2-cilium-config-path\") pod \"cilium-operator-5d85765b45-s6gp5\" (UID: \"7dc7d788-7440-48cf-94c8-a67c51714ad2\") " pod="kube-system/cilium-operator-5d85765b45-s6gp5" Sep 13 01:35:07.412560 kubelet[2420]: I0913 01:35:07.412523 2420 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 01:35:07.517448 env[1475]: time="2025-09-13T01:35:07.517332696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-84gmr,Uid:8d4d5e85-ba89-4fb5-a475-d7473628fbb7,Namespace:kube-system,Attempt:0,}" Sep 13 01:35:07.525234 env[1475]: time="2025-09-13T01:35:07.525013062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4l7k,Uid:44e9af50-5481-40e7-b2ea-2d436e49614b,Namespace:kube-system,Attempt:0,}" Sep 13 01:35:07.588785 env[1475]: time="2025-09-13T01:35:07.588617556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:07.588785 env[1475]: time="2025-09-13T01:35:07.588654276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:07.588785 env[1475]: time="2025-09-13T01:35:07.588667116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:07.589878 env[1475]: time="2025-09-13T01:35:07.589033917Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22f0310bed090faaacdf043764f26156582fc31aa5df64f58c803225acf7ff45 pid=2501 runtime=io.containerd.runc.v2 Sep 13 01:35:07.598676 systemd[1]: Started cri-containerd-22f0310bed090faaacdf043764f26156582fc31aa5df64f58c803225acf7ff45.scope. Sep 13 01:35:07.603809 env[1475]: time="2025-09-13T01:35:07.603729569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:07.603809 env[1475]: time="2025-09-13T01:35:07.603781129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:07.603976 env[1475]: time="2025-09-13T01:35:07.603791929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:07.604155 env[1475]: time="2025-09-13T01:35:07.604121769Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446 pid=2528 runtime=io.containerd.runc.v2 Sep 13 01:35:07.615436 systemd[1]: Started cri-containerd-284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446.scope. Sep 13 01:35:07.632498 env[1475]: time="2025-09-13T01:35:07.632447793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-84gmr,Uid:8d4d5e85-ba89-4fb5-a475-d7473628fbb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"22f0310bed090faaacdf043764f26156582fc31aa5df64f58c803225acf7ff45\"" Sep 13 01:35:07.638196 env[1475]: time="2025-09-13T01:35:07.638159278Z" level=info msg="CreateContainer within sandbox \"22f0310bed090faaacdf043764f26156582fc31aa5df64f58c803225acf7ff45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 01:35:07.650362 env[1475]: time="2025-09-13T01:35:07.650315408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4l7k,Uid:44e9af50-5481-40e7-b2ea-2d436e49614b,Namespace:kube-system,Attempt:0,} returns sandbox id \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\"" Sep 13 01:35:07.652425 env[1475]: time="2025-09-13T01:35:07.651672650Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 01:35:07.710411 env[1475]: time="2025-09-13T01:35:07.710366739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-s6gp5,Uid:7dc7d788-7440-48cf-94c8-a67c51714ad2,Namespace:kube-system,Attempt:0,}" Sep 13 01:35:07.917138 env[1475]: time="2025-09-13T01:35:07.916668954Z" level=info msg="CreateContainer within sandbox \"22f0310bed090faaacdf043764f26156582fc31aa5df64f58c803225acf7ff45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"354a52fddfa0df89f794550eb263dbdc1a2502c4e71a20b42936f0b11c16b145\"" Sep 13 01:35:07.918554 env[1475]: time="2025-09-13T01:35:07.918521836Z" level=info msg="StartContainer for \"354a52fddfa0df89f794550eb263dbdc1a2502c4e71a20b42936f0b11c16b145\"" Sep 13 01:35:07.934917 systemd[1]: Started cri-containerd-354a52fddfa0df89f794550eb263dbdc1a2502c4e71a20b42936f0b11c16b145.scope. Sep 13 01:35:07.962337 env[1475]: time="2025-09-13T01:35:07.962223793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:07.962337 env[1475]: time="2025-09-13T01:35:07.962271433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:07.962337 env[1475]: time="2025-09-13T01:35:07.962281553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:07.962540 env[1475]: time="2025-09-13T01:35:07.962423633Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334 pid=2609 runtime=io.containerd.runc.v2 Sep 13 01:35:07.979123 env[1475]: time="2025-09-13T01:35:07.979077047Z" level=info msg="StartContainer for \"354a52fddfa0df89f794550eb263dbdc1a2502c4e71a20b42936f0b11c16b145\" returns successfully" Sep 13 01:35:07.990222 systemd[1]: Started cri-containerd-4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334.scope. Sep 13 01:35:08.021858 env[1475]: time="2025-09-13T01:35:08.021818163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-s6gp5,Uid:7dc7d788-7440-48cf-94c8-a67c51714ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334\"" Sep 13 01:35:09.579660 kubelet[2420]: I0913 01:35:09.579551 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-84gmr" podStartSLOduration=2.579533559 podStartE2EDuration="2.579533559s" podCreationTimestamp="2025-09-13 01:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:08.371992812 +0000 UTC m=+6.206527404" watchObservedRunningTime="2025-09-13 01:35:09.579533559 +0000 UTC m=+7.414068151" Sep 13 01:35:12.457415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296363264.mount: Deactivated successfully. Sep 13 01:35:15.085706 env[1475]: time="2025-09-13T01:35:15.085652677Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:15.097836 env[1475]: time="2025-09-13T01:35:15.097794446Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:15.102142 env[1475]: time="2025-09-13T01:35:15.102111049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:15.102675 env[1475]: time="2025-09-13T01:35:15.102644529Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 13 01:35:15.106437 env[1475]: time="2025-09-13T01:35:15.106391372Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 01:35:15.107165 env[1475]: time="2025-09-13T01:35:15.107120932Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:35:15.132468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361519920.mount: Deactivated successfully. 
Sep 13 01:35:15.137817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332054714.mount: Deactivated successfully. Sep 13 01:35:15.153924 env[1475]: time="2025-09-13T01:35:15.153873005Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\"" Sep 13 01:35:15.154473 env[1475]: time="2025-09-13T01:35:15.154446885Z" level=info msg="StartContainer for \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\"" Sep 13 01:35:15.172654 systemd[1]: Started cri-containerd-a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d.scope. Sep 13 01:35:15.206430 env[1475]: time="2025-09-13T01:35:15.206384882Z" level=info msg="StartContainer for \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\" returns successfully" Sep 13 01:35:15.211669 systemd[1]: cri-containerd-a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d.scope: Deactivated successfully. Sep 13 01:35:16.131042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d-rootfs.mount: Deactivated successfully. Sep 13 01:35:16.490274 env[1475]: time="2025-09-13T01:35:16.490190933Z" level=info msg="shim disconnected" id=a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d Sep 13 01:35:16.490274 env[1475]: time="2025-09-13T01:35:16.490234333Z" level=warning msg="cleaning up after shim disconnected" id=a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d namespace=k8s.io Sep 13 01:35:16.490829 env[1475]: time="2025-09-13T01:35:16.490648374Z" level=info msg="cleaning up dead shim" Sep 13 01:35:16.497865 env[1475]: time="2025-09-13T01:35:16.497824059Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2837 runtime=io.containerd.runc.v2\n" Sep 13 01:35:17.372273 env[1475]: time="2025-09-13T01:35:17.366049327Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:35:17.407881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036164554.mount: Deactivated successfully. Sep 13 01:35:17.421911 env[1475]: time="2025-09-13T01:35:17.421858765Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\"" Sep 13 01:35:17.423360 env[1475]: time="2025-09-13T01:35:17.423318486Z" level=info msg="StartContainer for \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\"" Sep 13 01:35:17.445278 systemd[1]: run-containerd-runc-k8s.io-fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359-runc.DfBN9F.mount: Deactivated successfully. Sep 13 01:35:17.446651 systemd[1]: Started cri-containerd-fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359.scope. Sep 13 01:35:17.482358 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:35:17.482547 systemd[1]: Stopped systemd-sysctl.service. Sep 13 01:35:17.482698 systemd[1]: Stopping systemd-sysctl.service... Sep 13 01:35:17.486050 systemd[1]: Starting systemd-sysctl.service... 
Sep 13 01:35:17.488481 systemd[1]: cri-containerd-fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359.scope: Deactivated successfully. Sep 13 01:35:17.493398 env[1475]: time="2025-09-13T01:35:17.493351572Z" level=info msg="StartContainer for \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\" returns successfully" Sep 13 01:35:17.493819 systemd[1]: Finished systemd-sysctl.service. Sep 13 01:35:17.529979 env[1475]: time="2025-09-13T01:35:17.529935237Z" level=info msg="shim disconnected" id=fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359 Sep 13 01:35:17.530319 env[1475]: time="2025-09-13T01:35:17.530298397Z" level=warning msg="cleaning up after shim disconnected" id=fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359 namespace=k8s.io Sep 13 01:35:17.530462 env[1475]: time="2025-09-13T01:35:17.530446397Z" level=info msg="cleaning up dead shim" Sep 13 01:35:17.537289 env[1475]: time="2025-09-13T01:35:17.537236162Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2898 runtime=io.containerd.runc.v2\n" Sep 13 01:35:18.367859 env[1475]: time="2025-09-13T01:35:18.367717752Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 01:35:18.403944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359-rootfs.mount: Deactivated successfully. Sep 13 01:35:18.423922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533648084.mount: Deactivated successfully. Sep 13 01:35:18.431125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584169595.mount: Deactivated successfully. Sep 13 01:35:18.451916 env[1475]: time="2025-09-13T01:35:18.451873207Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\"" Sep 13 01:35:18.454309 env[1475]: time="2025-09-13T01:35:18.454279209Z" level=info msg="StartContainer for \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\"" Sep 13 01:35:18.489001 systemd[1]: Started cri-containerd-a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5.scope. Sep 13 01:35:18.524091 systemd[1]: cri-containerd-a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5.scope: Deactivated successfully. 
Sep 13 01:35:18.530738 env[1475]: time="2025-09-13T01:35:18.530698619Z" level=info msg="StartContainer for \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\" returns successfully" Sep 13 01:35:18.819732 env[1475]: time="2025-09-13T01:35:18.819685368Z" level=info msg="shim disconnected" id=a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5 Sep 13 01:35:18.819732 env[1475]: time="2025-09-13T01:35:18.819728888Z" level=warning msg="cleaning up after shim disconnected" id=a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5 namespace=k8s.io Sep 13 01:35:18.819732 env[1475]: time="2025-09-13T01:35:18.819738288Z" level=info msg="cleaning up dead shim" Sep 13 01:35:18.833664 env[1475]: time="2025-09-13T01:35:18.833617817Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2955 runtime=io.containerd.runc.v2\n" Sep 13 01:35:18.932656 env[1475]: time="2025-09-13T01:35:18.932614122Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:18.940020 env[1475]: time="2025-09-13T01:35:18.939989047Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:18.945313 env[1475]: time="2025-09-13T01:35:18.945286410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:35:18.946067 env[1475]: time="2025-09-13T01:35:18.946035931Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 01:35:18.950134 env[1475]: time="2025-09-13T01:35:18.950106253Z" level=info msg="CreateContainer within sandbox \"4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 01:35:18.990686 env[1475]: time="2025-09-13T01:35:18.990634680Z" level=info msg="CreateContainer within sandbox \"4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\"" Sep 13 01:35:18.991879 env[1475]: time="2025-09-13T01:35:18.991115520Z" level=info msg="StartContainer for \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\"" Sep 13 01:35:19.006912 systemd[1]: Started cri-containerd-265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde.scope. 
Sep 13 01:35:19.040497 env[1475]: time="2025-09-13T01:35:19.040442832Z" level=info msg="StartContainer for \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\" returns successfully" Sep 13 01:35:19.369846 env[1475]: time="2025-09-13T01:35:19.369804723Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 01:35:19.424000 env[1475]: time="2025-09-13T01:35:19.423941557Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\"" Sep 13 01:35:19.424787 env[1475]: time="2025-09-13T01:35:19.424764278Z" level=info msg="StartContainer for \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\"" Sep 13 01:35:19.449436 systemd[1]: Started cri-containerd-aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128.scope. Sep 13 01:35:19.505552 systemd[1]: cri-containerd-aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128.scope: Deactivated successfully. Sep 13 01:35:19.509666 kubelet[2420]: I0913 01:35:19.509595 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-s6gp5" podStartSLOduration=1.585640765 podStartE2EDuration="12.509576412s" podCreationTimestamp="2025-09-13 01:35:07 +0000 UTC" firstStartedPulling="2025-09-13 01:35:08.023200444 +0000 UTC m=+5.857735036" lastFinishedPulling="2025-09-13 01:35:18.947136131 +0000 UTC m=+16.781670683" observedRunningTime="2025-09-13 01:35:19.435301284 +0000 UTC m=+17.269835876" watchObservedRunningTime="2025-09-13 01:35:19.509576412 +0000 UTC m=+17.344111044" Sep 13 01:35:19.511834 env[1475]: time="2025-09-13T01:35:19.511793933Z" level=info msg="StartContainer for \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\" returns successfully" Sep 13 01:35:19.572983 env[1475]: time="2025-09-13T01:35:19.572938892Z" level=info msg="shim disconnected" id=aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128 Sep 13 01:35:19.573502 env[1475]: time="2025-09-13T01:35:19.573479333Z" level=warning msg="cleaning up after shim disconnected" id=aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128 namespace=k8s.io Sep 13 01:35:19.573597 env[1475]: time="2025-09-13T01:35:19.573583253Z" level=info msg="cleaning up dead shim" Sep 13 01:35:19.583740 env[1475]: time="2025-09-13T01:35:19.583701739Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:35:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3049 runtime=io.containerd.runc.v2\n" Sep 13 01:35:20.375015 env[1475]: time="2025-09-13T01:35:20.374967721Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 01:35:20.404002 systemd[1]: run-containerd-runc-k8s.io-aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128-runc.HlCmMA.mount: Deactivated successfully. Sep 13 01:35:20.404096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128-rootfs.mount: Deactivated successfully. 
Sep 13 01:35:20.432780 env[1475]: time="2025-09-13T01:35:20.432731837Z" level=info msg="CreateContainer within sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\"" Sep 13 01:35:20.433291 env[1475]: time="2025-09-13T01:35:20.433266677Z" level=info msg="StartContainer for \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\"" Sep 13 01:35:20.453594 systemd[1]: run-containerd-runc-k8s.io-b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87-runc.XpdPXV.mount: Deactivated successfully. Sep 13 01:35:20.456576 systemd[1]: Started cri-containerd-b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87.scope. Sep 13 01:35:20.500159 env[1475]: time="2025-09-13T01:35:20.498035078Z" level=info msg="StartContainer for \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\" returns successfully" Sep 13 01:35:20.600308 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 01:35:20.601484 kubelet[2420]: I0913 01:35:20.600744 2420 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 01:35:20.640162 systemd[1]: Created slice kubepods-burstable-pod6504e50c_1684_4c0c_a591_5ed9b4788fcb.slice. Sep 13 01:35:20.649615 systemd[1]: Created slice kubepods-burstable-pod1503067e_8edd_4ca1_b556_ce79bd36182f.slice. Sep 13 01:35:20.699378 kubelet[2420]: I0913 01:35:20.699346 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6504e50c-1684-4c0c-a591-5ed9b4788fcb-config-volume\") pod \"coredns-7c65d6cfc9-m7p6w\" (UID: \"6504e50c-1684-4c0c-a591-5ed9b4788fcb\") " pod="kube-system/coredns-7c65d6cfc9-m7p6w" Sep 13 01:35:20.699576 kubelet[2420]: I0913 01:35:20.699558 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psbtg\" (UniqueName: \"kubernetes.io/projected/6504e50c-1684-4c0c-a591-5ed9b4788fcb-kube-api-access-psbtg\") pod \"coredns-7c65d6cfc9-m7p6w\" (UID: \"6504e50c-1684-4c0c-a591-5ed9b4788fcb\") " pod="kube-system/coredns-7c65d6cfc9-m7p6w" Sep 13 01:35:20.699675 kubelet[2420]: I0913 01:35:20.699659 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1503067e-8edd-4ca1-b556-ce79bd36182f-config-volume\") pod \"coredns-7c65d6cfc9-nzndc\" (UID: \"1503067e-8edd-4ca1-b556-ce79bd36182f\") " pod="kube-system/coredns-7c65d6cfc9-nzndc" Sep 13 01:35:20.699763 kubelet[2420]: I0913 01:35:20.699751 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqwbn\" (UniqueName: \"kubernetes.io/projected/1503067e-8edd-4ca1-b556-ce79bd36182f-kube-api-access-jqwbn\") pod \"coredns-7c65d6cfc9-nzndc\" (UID: \"1503067e-8edd-4ca1-b556-ce79bd36182f\") " pod="kube-system/coredns-7c65d6cfc9-nzndc" Sep 13 01:35:20.944340 env[1475]: time="2025-09-13T01:35:20.943995957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m7p6w,Uid:6504e50c-1684-4c0c-a591-5ed9b4788fcb,Namespace:kube-system,Attempt:0,}" Sep 13 01:35:20.952459 env[1475]: time="2025-09-13T01:35:20.952423802Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nzndc,Uid:1503067e-8edd-4ca1-b556-ce79bd36182f,Namespace:kube-system,Attempt:0,}" Sep 13 01:35:21.222284 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 01:35:21.395165 kubelet[2420]: I0913 01:35:21.394812 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q4l7k" podStartSLOduration=6.942075873 podStartE2EDuration="14.394796194s" podCreationTimestamp="2025-09-13 01:35:07 +0000 UTC" firstStartedPulling="2025-09-13 01:35:07.651262609 +0000 UTC m=+5.485797201" lastFinishedPulling="2025-09-13 01:35:15.10398293 +0000 UTC m=+12.938517522" observedRunningTime="2025-09-13 01:35:21.394197873 +0000 UTC m=+19.228732465" watchObservedRunningTime="2025-09-13 01:35:21.394796194 +0000 UTC m=+19.229330786" Sep 13 01:35:22.932938 systemd-networkd[1638]: cilium_host: Link UP Sep 13 01:35:22.933819 systemd-networkd[1638]: cilium_net: Link UP Sep 13 01:35:22.945967 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 01:35:22.946167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 01:35:22.948326 systemd-networkd[1638]: cilium_net: Gained carrier Sep 13 01:35:22.948523 systemd-networkd[1638]: cilium_host: Gained carrier Sep 13 01:35:23.157456 systemd-networkd[1638]: cilium_vxlan: Link UP Sep 13 01:35:23.157463 systemd-networkd[1638]: cilium_vxlan: Gained carrier Sep 13 01:35:23.442285 kernel: NET: Registered PF_ALG protocol family Sep 13 01:35:23.815378 systemd-networkd[1638]: cilium_net: Gained IPv6LL Sep 13 01:35:23.944443 systemd-networkd[1638]: cilium_host: Gained IPv6LL Sep 13 01:35:24.333996 systemd-networkd[1638]: lxc_health: Link UP Sep 13 01:35:24.351458 systemd-networkd[1638]: lxc_health: Gained carrier Sep 13 01:35:24.354630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 01:35:24.539956 systemd-networkd[1638]: lxcb0769ff69f6d: Link UP Sep 13 01:35:24.548696 systemd-networkd[1638]: lxcaf62c6ce64ff: Link UP Sep 13 01:35:24.557270 kernel: eth0: renamed from tmp8f8cf Sep 13 01:35:24.567304 kernel: eth0: renamed from tmp7d95b Sep 13 01:35:24.587318 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb0769ff69f6d: link becomes ready Sep 13 01:35:24.585472 systemd-networkd[1638]: lxcb0769ff69f6d: Gained carrier Sep 13 01:35:24.595295 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaf62c6ce64ff: link becomes ready Sep 13 01:35:24.597425 systemd-networkd[1638]: lxcaf62c6ce64ff: Gained carrier Sep 13 01:35:25.223429 systemd-networkd[1638]: cilium_vxlan: Gained IPv6LL Sep 13 01:35:25.799372 systemd-networkd[1638]: lxc_health: Gained IPv6LL Sep 13 01:35:25.928371 systemd-networkd[1638]: lxcb0769ff69f6d: Gained IPv6LL Sep 13 01:35:25.992369 systemd-networkd[1638]: lxcaf62c6ce64ff: Gained IPv6LL Sep 13 01:35:28.110060 env[1475]: time="2025-09-13T01:35:28.109995155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:28.110415 env[1475]: time="2025-09-13T01:35:28.110036395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:28.110415 env[1475]: time="2025-09-13T01:35:28.110047155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:28.110415 env[1475]: time="2025-09-13T01:35:28.110151755Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d95bda51a27123025981d1f26e11fdebf9e1f5e42077754e2cc1920fab61e61 pid=3597 runtime=io.containerd.runc.v2 Sep 13 01:35:28.120708 env[1475]: time="2025-09-13T01:35:28.120632881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:35:28.120841 env[1475]: time="2025-09-13T01:35:28.120678961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:35:28.120841 env[1475]: time="2025-09-13T01:35:28.120689681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:35:28.120841 env[1475]: time="2025-09-13T01:35:28.120798281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f8cfaa2b95211554d1bea3d454a3b2e1a9a30ffce275adffab924c433453abb pid=3612 runtime=io.containerd.runc.v2 Sep 13 01:35:28.125423 systemd[1]: Started cri-containerd-7d95bda51a27123025981d1f26e11fdebf9e1f5e42077754e2cc1920fab61e61.scope. Sep 13 01:35:28.145265 systemd[1]: Started cri-containerd-8f8cfaa2b95211554d1bea3d454a3b2e1a9a30ffce275adffab924c433453abb.scope. Sep 13 01:35:28.183872 env[1475]: time="2025-09-13T01:35:28.183833514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m7p6w,Uid:6504e50c-1684-4c0c-a591-5ed9b4788fcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f8cfaa2b95211554d1bea3d454a3b2e1a9a30ffce275adffab924c433453abb\"" Sep 13 01:35:28.202688 env[1475]: time="2025-09-13T01:35:28.202650804Z" level=info msg="CreateContainer within sandbox \"8f8cfaa2b95211554d1bea3d454a3b2e1a9a30ffce275adffab924c433453abb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:35:28.223118 env[1475]: time="2025-09-13T01:35:28.223070455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nzndc,Uid:1503067e-8edd-4ca1-b556-ce79bd36182f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d95bda51a27123025981d1f26e11fdebf9e1f5e42077754e2cc1920fab61e61\"" Sep 13 01:35:28.227150 env[1475]: time="2025-09-13T01:35:28.227119617Z" level=info msg="CreateContainer within sandbox \"7d95bda51a27123025981d1f26e11fdebf9e1f5e42077754e2cc1920fab61e61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:35:28.270977 env[1475]: time="2025-09-13T01:35:28.270924640Z" level=info msg="CreateContainer within sandbox \"8f8cfaa2b95211554d1bea3d454a3b2e1a9a30ffce275adffab924c433453abb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc46cc60626be228fbb3bca1d094be606789c438782d8ec88127221c35964e01\"" Sep 13 01:35:28.271768 env[1475]: time="2025-09-13T01:35:28.271743361Z" level=info msg="StartContainer for \"bc46cc60626be228fbb3bca1d094be606789c438782d8ec88127221c35964e01\"" Sep 13 01:35:28.293870 systemd[1]: Started cri-containerd-bc46cc60626be228fbb3bca1d094be606789c438782d8ec88127221c35964e01.scope. 
Sep 13 01:35:28.297108 env[1475]: time="2025-09-13T01:35:28.297026454Z" level=info msg="CreateContainer within sandbox \"7d95bda51a27123025981d1f26e11fdebf9e1f5e42077754e2cc1920fab61e61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ad9b6ba577013af24951658e37e6d7e46785ef283ca3a29e318e724f8449211\"" Sep 13 01:35:28.299225 env[1475]: time="2025-09-13T01:35:28.297948775Z" level=info msg="StartContainer for \"2ad9b6ba577013af24951658e37e6d7e46785ef283ca3a29e318e724f8449211\"" Sep 13 01:35:28.325986 systemd[1]: Started cri-containerd-2ad9b6ba577013af24951658e37e6d7e46785ef283ca3a29e318e724f8449211.scope. Sep 13 01:35:28.340502 env[1475]: time="2025-09-13T01:35:28.340451197Z" level=info msg="StartContainer for \"bc46cc60626be228fbb3bca1d094be606789c438782d8ec88127221c35964e01\" returns successfully" Sep 13 01:35:28.367778 env[1475]: time="2025-09-13T01:35:28.367681732Z" level=info msg="StartContainer for \"2ad9b6ba577013af24951658e37e6d7e46785ef283ca3a29e318e724f8449211\" returns successfully" Sep 13 01:35:28.425942 kubelet[2420]: I0913 01:35:28.425884 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nzndc" podStartSLOduration=21.425866122 podStartE2EDuration="21.425866122s" podCreationTimestamp="2025-09-13 01:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:28.424493682 +0000 UTC m=+26.259028274" watchObservedRunningTime="2025-09-13 01:35:28.425866122 +0000 UTC m=+26.260400714" Sep 13 01:35:29.114556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105702913.mount: Deactivated successfully. Sep 13 01:35:29.407584 kubelet[2420]: I0913 01:35:29.407453 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m7p6w" podStartSLOduration=22.407438557 podStartE2EDuration="22.407438557s" podCreationTimestamp="2025-09-13 01:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:35:28.442561731 +0000 UTC m=+26.277096323" watchObservedRunningTime="2025-09-13 01:35:29.407438557 +0000 UTC m=+27.241973109" Sep 13 01:36:40.648394 update_engine[1464]: I0913 01:36:40.648310 1464 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 01:36:40.648394 update_engine[1464]: I0913 01:36:40.648346 1464 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 01:36:40.648774 update_engine[1464]: I0913 01:36:40.648470 1464 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 01:36:40.649081 update_engine[1464]: I0913 01:36:40.648801 1464 omaha_request_params.cc:62] Current group set to lts Sep 13 01:36:40.649081 update_engine[1464]: I0913 01:36:40.648902 1464 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 01:36:40.649081 update_engine[1464]: I0913 01:36:40.648908 1464 update_attempter.cc:643] Scheduling an action processor start. 
Sep 13 01:36:40.649081 update_engine[1464]: I0913 01:36:40.648922 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 01:36:40.649081 update_engine[1464]: I0913 01:36:40.648960 1464 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 01:36:40.649447 locksmithd[1557]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 01:36:40.678930 update_engine[1464]: I0913 01:36:40.678895 1464 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 13 01:36:40.678930 update_engine[1464]: I0913 01:36:40.678921 1464 omaha_request_action.cc:271] Request: Sep 13 01:36:40.678930 update_engine[1464]: I0913 01:36:40.678927 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:36:40.776160 update_engine[1464]: I0913 01:36:40.776126 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:36:40.776464 update_engine[1464]: I0913 01:36:40.776360 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 01:36:40.882474 update_engine[1464]: E0913 01:36:40.882438 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:36:40.882613 update_engine[1464]: I0913 01:36:40.882545 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 01:36:51.583462 update_engine[1464]: I0913 01:36:51.583420 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:36:51.583829 update_engine[1464]: I0913 01:36:51.583623 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:36:51.583829 update_engine[1464]: I0913 01:36:51.583819 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 01:36:51.689471 update_engine[1464]: E0913 01:36:51.689421 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:36:51.689639 update_engine[1464]: I0913 01:36:51.689570 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 01:37:01.586568 update_engine[1464]: I0913 01:37:01.586524 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:37:01.586882 update_engine[1464]: I0913 01:37:01.586724 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:37:01.586913 update_engine[1464]: I0913 01:37:01.586901 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 01:37:01.592657 update_engine[1464]: E0913 01:37:01.592635 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:37:01.592754 update_engine[1464]: I0913 01:37:01.592721 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 01:37:05.363177 systemd[1]: Started sshd@5-10.200.20.15:22-10.200.16.10:36434.service. Sep 13 01:37:05.784591 sshd[3772]: Accepted publickey for core from 10.200.16.10 port 36434 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:05.786360 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:05.790936 systemd[1]: Started session-8.scope.
Sep 13 01:37:05.791550 systemd-logind[1463]: New session 8 of user core. Sep 13 01:37:06.174762 sshd[3772]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:06.177973 systemd[1]: sshd@5-10.200.20.15:22-10.200.16.10:36434.service: Deactivated successfully. Sep 13 01:37:06.178725 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 01:37:06.179756 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Sep 13 01:37:06.180548 systemd-logind[1463]: Removed session 8. Sep 13 01:37:11.242854 systemd[1]: Started sshd@6-10.200.20.15:22-10.200.16.10:50972.service. Sep 13 01:37:11.577889 update_engine[1464]: I0913 01:37:11.577775 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:37:11.578198 update_engine[1464]: I0913 01:37:11.578156 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:37:11.578389 update_engine[1464]: I0913 01:37:11.578365 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 01:37:11.584016 update_engine[1464]: E0913 01:37:11.583990 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:37:11.584123 update_engine[1464]: I0913 01:37:11.584072 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 01:37:11.584123 update_engine[1464]: I0913 01:37:11.584079 1464 omaha_request_action.cc:621] Omaha request response: Sep 13 01:37:11.584220 update_engine[1464]: E0913 01:37:11.584164 1464 omaha_request_action.cc:640] Omaha request network transfer failed. Sep 13 01:37:11.584220 update_engine[1464]: I0913 01:37:11.584182 1464 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 01:37:11.584220 update_engine[1464]: I0913 01:37:11.584188 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 01:37:11.584220 update_engine[1464]: I0913 01:37:11.584193 1464 update_attempter.cc:306] Processing Done. Sep 13 01:37:11.584220 update_engine[1464]: E0913 01:37:11.584210 1464 update_attempter.cc:619] Update failed. Sep 13 01:37:11.584220 update_engine[1464]: I0913 01:37:11.584215 1464 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 01:37:11.584220 update_engine[1464]: I0913 01:37:11.584221 1464 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 01:37:11.584446 update_engine[1464]: I0913 01:37:11.584228 1464 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 13 01:37:11.584446 update_engine[1464]: I0913 01:37:11.584338 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 01:37:11.584446 update_engine[1464]: I0913 01:37:11.584357 1464 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 13 01:37:11.584446 update_engine[1464]: I0913 01:37:11.584360 1464 omaha_request_action.cc:271] Request: Sep 13 01:37:11.584446 update_engine[1464]: I0913 01:37:11.584364 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:37:11.584741 update_engine[1464]: I0913 01:37:11.584470 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:37:11.584741 update_engine[1464]: I0913 01:37:11.584614 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 01:37:11.584963 locksmithd[1557]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 01:37:11.618851 update_engine[1464]: E0913 01:37:11.618798 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:37:11.619011 update_engine[1464]: I0913 01:37:11.618938 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 01:37:11.619011 update_engine[1464]: I0913 01:37:11.618948 1464 omaha_request_action.cc:621] Omaha request response: Sep 13 01:37:11.619011 update_engine[1464]: I0913 01:37:11.618956 1464 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 01:37:11.619011 update_engine[1464]: I0913 01:37:11.618961 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 01:37:11.619011 update_engine[1464]: I0913 01:37:11.618966 1464 update_attempter.cc:306] Processing Done. Sep 13 01:37:11.619011 update_engine[1464]: I0913 01:37:11.618974 1464 update_attempter.cc:310] Error event sent. Sep 13 01:37:11.619011 update_engine[1464]: I0913 01:37:11.618985 1464 update_check_scheduler.cc:74] Next update check in 41m31s Sep 13 01:37:11.619336 locksmithd[1557]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 01:37:11.653330 sshd[3787]: Accepted publickey for core from 10.200.16.10 port 50972 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:11.654606 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:11.659044 systemd[1]: Started session-9.scope. Sep 13 01:37:11.659708 systemd-logind[1463]: New session 9 of user core. Sep 13 01:37:12.018055 sshd[3787]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:12.020600 systemd[1]: sshd@6-10.200.20.15:22-10.200.16.10:50972.service: Deactivated successfully. Sep 13 01:37:12.021340 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 01:37:12.022522 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Sep 13 01:37:12.023467 systemd-logind[1463]: Removed session 9. Sep 13 01:37:17.090640 systemd[1]: Started sshd@7-10.200.20.15:22-10.200.16.10:50976.service.
Sep 13 01:37:17.509779 sshd[3799]: Accepted publickey for core from 10.200.16.10 port 50976 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:17.511447 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:17.515836 systemd-logind[1463]: New session 10 of user core. Sep 13 01:37:17.516377 systemd[1]: Started session-10.scope. Sep 13 01:37:17.892841 sshd[3799]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:17.895882 systemd[1]: sshd@7-10.200.20.15:22-10.200.16.10:50976.service: Deactivated successfully. Sep 13 01:37:17.896091 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Sep 13 01:37:17.896695 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 01:37:17.897668 systemd-logind[1463]: Removed session 10. Sep 13 01:37:22.964211 systemd[1]: Started sshd@8-10.200.20.15:22-10.200.16.10:50746.service. Sep 13 01:37:23.386269 sshd[3812]: Accepted publickey for core from 10.200.16.10 port 50746 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:23.387491 sshd[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:23.391544 systemd-logind[1463]: New session 11 of user core. Sep 13 01:37:23.392050 systemd[1]: Started session-11.scope. Sep 13 01:37:23.788799 sshd[3812]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:23.791433 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 01:37:23.792114 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Sep 13 01:37:23.792229 systemd[1]: sshd@8-10.200.20.15:22-10.200.16.10:50746.service: Deactivated successfully. Sep 13 01:37:23.793407 systemd-logind[1463]: Removed session 11. Sep 13 01:37:23.858919 systemd[1]: Started sshd@9-10.200.20.15:22-10.200.16.10:50756.service. Sep 13 01:37:24.278463 sshd[3826]: Accepted publickey for core from 10.200.16.10 port 50756 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:24.280086 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:24.284579 systemd[1]: Started session-12.scope. Sep 13 01:37:24.285142 systemd-logind[1463]: New session 12 of user core. Sep 13 01:37:24.688753 sshd[3826]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:24.691663 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Sep 13 01:37:24.692514 systemd[1]: sshd@9-10.200.20.15:22-10.200.16.10:50756.service: Deactivated successfully. Sep 13 01:37:24.693201 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 01:37:24.693930 systemd-logind[1463]: Removed session 12. Sep 13 01:37:24.779198 systemd[1]: Started sshd@10-10.200.20.15:22-10.200.16.10:50758.service. Sep 13 01:37:25.204343 sshd[3836]: Accepted publickey for core from 10.200.16.10 port 50758 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:25.205677 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:25.210194 systemd[1]: Started session-13.scope. Sep 13 01:37:25.210805 systemd-logind[1463]: New session 13 of user core. Sep 13 01:37:25.604566 sshd[3836]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:25.607242 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 01:37:25.608050 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. 
Sep 13 01:37:25.608179 systemd[1]: sshd@10-10.200.20.15:22-10.200.16.10:50758.service: Deactivated successfully. Sep 13 01:37:25.609330 systemd-logind[1463]: Removed session 13. Sep 13 01:37:30.676869 systemd[1]: Started sshd@11-10.200.20.15:22-10.200.16.10:60772.service. Sep 13 01:37:31.098333 sshd[3848]: Accepted publickey for core from 10.200.16.10 port 60772 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:31.099957 sshd[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:31.104224 systemd[1]: Started session-14.scope. Sep 13 01:37:31.104547 systemd-logind[1463]: New session 14 of user core. Sep 13 01:37:31.486917 sshd[3848]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:31.489225 systemd[1]: sshd@11-10.200.20.15:22-10.200.16.10:60772.service: Deactivated successfully. Sep 13 01:37:31.489988 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 01:37:31.490585 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Sep 13 01:37:31.491573 systemd-logind[1463]: Removed session 14. Sep 13 01:37:36.557633 systemd[1]: Started sshd@12-10.200.20.15:22-10.200.16.10:60774.service. Sep 13 01:37:36.977614 sshd[3860]: Accepted publickey for core from 10.200.16.10 port 60774 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:36.979307 sshd[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:36.983116 systemd-logind[1463]: New session 15 of user core. Sep 13 01:37:36.983669 systemd[1]: Started session-15.scope. Sep 13 01:37:37.355100 sshd[3860]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:37.357937 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 01:37:37.358627 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Sep 13 01:37:37.358781 systemd[1]: sshd@12-10.200.20.15:22-10.200.16.10:60774.service: Deactivated successfully. Sep 13 01:37:37.360016 systemd-logind[1463]: Removed session 15. Sep 13 01:37:37.426887 systemd[1]: Started sshd@13-10.200.20.15:22-10.200.16.10:60784.service. Sep 13 01:37:37.849372 sshd[3871]: Accepted publickey for core from 10.200.16.10 port 60784 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:37.850657 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:37.854618 systemd-logind[1463]: New session 16 of user core. Sep 13 01:37:37.855079 systemd[1]: Started session-16.scope. Sep 13 01:37:38.266130 sshd[3871]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:38.268998 systemd[1]: sshd@13-10.200.20.15:22-10.200.16.10:60784.service: Deactivated successfully. Sep 13 01:37:38.269756 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 01:37:38.270843 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Sep 13 01:37:38.271664 systemd-logind[1463]: Removed session 16. Sep 13 01:37:38.336970 systemd[1]: Started sshd@14-10.200.20.15:22-10.200.16.10:60800.service. Sep 13 01:37:38.759582 sshd[3883]: Accepted publickey for core from 10.200.16.10 port 60800 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:38.761272 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:38.765580 systemd[1]: Started session-17.scope. Sep 13 01:37:38.766311 systemd-logind[1463]: New session 17 of user core. 
Sep 13 01:37:40.244480 sshd[3883]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:40.247381 systemd[1]: sshd@14-10.200.20.15:22-10.200.16.10:60800.service: Deactivated successfully. Sep 13 01:37:40.248116 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 01:37:40.248688 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Sep 13 01:37:40.249681 systemd-logind[1463]: Removed session 17. Sep 13 01:37:40.314580 systemd[1]: Started sshd@15-10.200.20.15:22-10.200.16.10:56216.service. Sep 13 01:37:40.736094 sshd[3900]: Accepted publickey for core from 10.200.16.10 port 56216 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:40.737453 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:40.741826 systemd-logind[1463]: New session 18 of user core. Sep 13 01:37:40.742219 systemd[1]: Started session-18.scope. Sep 13 01:37:41.219777 sshd[3900]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:41.222619 systemd[1]: sshd@15-10.200.20.15:22-10.200.16.10:56216.service: Deactivated successfully. Sep 13 01:37:41.224140 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 01:37:41.224956 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. Sep 13 01:37:41.225861 systemd-logind[1463]: Removed session 18. Sep 13 01:37:41.300804 systemd[1]: Started sshd@16-10.200.20.15:22-10.200.16.10:56228.service. Sep 13 01:37:41.748418 sshd[3910]: Accepted publickey for core from 10.200.16.10 port 56228 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:41.749759 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:41.754235 systemd[1]: Started session-19.scope. Sep 13 01:37:41.755455 systemd-logind[1463]: New session 19 of user core. Sep 13 01:37:42.142203 sshd[3910]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:42.145273 systemd[1]: sshd@16-10.200.20.15:22-10.200.16.10:56228.service: Deactivated successfully. Sep 13 01:37:42.146027 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 01:37:42.146613 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. Sep 13 01:37:42.147520 systemd-logind[1463]: Removed session 19. Sep 13 01:37:47.207130 systemd[1]: Started sshd@17-10.200.20.15:22-10.200.16.10:56230.service. Sep 13 01:37:47.617282 sshd[3925]: Accepted publickey for core from 10.200.16.10 port 56230 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:47.618876 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:47.623225 systemd[1]: Started session-20.scope. Sep 13 01:37:47.623686 systemd-logind[1463]: New session 20 of user core. Sep 13 01:37:47.975853 sshd[3925]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:47.978707 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Sep 13 01:37:47.978716 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 01:37:47.979408 systemd[1]: sshd@17-10.200.20.15:22-10.200.16.10:56230.service: Deactivated successfully. Sep 13 01:37:47.980570 systemd-logind[1463]: Removed session 20. Sep 13 01:37:53.049317 systemd[1]: Started sshd@18-10.200.20.15:22-10.200.16.10:33038.service. 
Sep 13 01:37:53.473844 sshd[3937]: Accepted publickey for core from 10.200.16.10 port 33038 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:53.475473 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:53.479785 systemd[1]: Started session-21.scope. Sep 13 01:37:53.480210 systemd-logind[1463]: New session 21 of user core. Sep 13 01:37:53.855741 sshd[3937]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:53.858116 systemd[1]: sshd@18-10.200.20.15:22-10.200.16.10:33038.service: Deactivated successfully. Sep 13 01:37:53.858881 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 01:37:53.859618 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Sep 13 01:37:53.860436 systemd-logind[1463]: Removed session 21. Sep 13 01:37:58.925030 systemd[1]: Started sshd@19-10.200.20.15:22-10.200.16.10:33052.service. Sep 13 01:37:59.349179 sshd[3950]: Accepted publickey for core from 10.200.16.10 port 33052 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:59.350777 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:59.355106 systemd[1]: Started session-22.scope. Sep 13 01:37:59.355448 systemd-logind[1463]: New session 22 of user core. Sep 13 01:37:59.736769 sshd[3950]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:59.739939 systemd[1]: sshd@19-10.200.20.15:22-10.200.16.10:33052.service: Deactivated successfully. Sep 13 01:37:59.740127 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit. Sep 13 01:37:59.740683 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 01:37:59.741409 systemd-logind[1463]: Removed session 22. Sep 13 01:37:59.807443 systemd[1]: Started sshd@20-10.200.20.15:22-10.200.16.10:33062.service. Sep 13 01:38:00.230176 sshd[3961]: Accepted publickey for core from 10.200.16.10 port 33062 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:38:00.231071 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:38:00.235457 systemd[1]: Started session-23.scope. Sep 13 01:38:00.235797 systemd-logind[1463]: New session 23 of user core. Sep 13 01:38:02.174236 env[1475]: time="2025-09-13T01:38:02.174171138Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:38:02.174576 env[1475]: time="2025-09-13T01:38:02.174492738Z" level=info msg="StopContainer for \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\" with timeout 30 (s)" Sep 13 01:38:02.174861 env[1475]: time="2025-09-13T01:38:02.174834817Z" level=info msg="Stop container \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\" with signal terminated" Sep 13 01:38:02.182566 env[1475]: time="2025-09-13T01:38:02.182525369Z" level=info msg="StopContainer for \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\" with timeout 2 (s)" Sep 13 01:38:02.182954 env[1475]: time="2025-09-13T01:38:02.182923329Z" level=info msg="Stop container \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\" with signal terminated" Sep 13 01:38:02.185558 systemd[1]: cri-containerd-265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde.scope: Deactivated successfully. 
Sep 13 01:38:02.191454 systemd-networkd[1638]: lxc_health: Link DOWN Sep 13 01:38:02.191460 systemd-networkd[1638]: lxc_health: Lost carrier Sep 13 01:38:02.216313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde-rootfs.mount: Deactivated successfully. Sep 13 01:38:02.224633 systemd[1]: cri-containerd-b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87.scope: Deactivated successfully. Sep 13 01:38:02.224972 systemd[1]: cri-containerd-b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87.scope: Consumed 6.163s CPU time. Sep 13 01:38:02.249167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87-rootfs.mount: Deactivated successfully. Sep 13 01:38:02.273976 env[1475]: time="2025-09-13T01:38:02.273923915Z" level=info msg="shim disconnected" id=265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde Sep 13 01:38:02.273976 env[1475]: time="2025-09-13T01:38:02.273973555Z" level=warning msg="cleaning up after shim disconnected" id=265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde namespace=k8s.io Sep 13 01:38:02.273976 env[1475]: time="2025-09-13T01:38:02.273984315Z" level=info msg="cleaning up dead shim" Sep 13 01:38:02.274666 env[1475]: time="2025-09-13T01:38:02.274637275Z" level=info msg="shim disconnected" id=b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87 Sep 13 01:38:02.274840 env[1475]: time="2025-09-13T01:38:02.274808074Z" level=warning msg="cleaning up after shim disconnected" id=b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87 namespace=k8s.io Sep 13 01:38:02.274840 env[1475]: time="2025-09-13T01:38:02.274834114Z" level=info msg="cleaning up dead shim" Sep 13 01:38:02.282094 env[1475]: time="2025-09-13T01:38:02.282054507Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4026 runtime=io.containerd.runc.v2\n" Sep 13 01:38:02.282717 env[1475]: time="2025-09-13T01:38:02.282683426Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4027 runtime=io.containerd.runc.v2\n" Sep 13 01:38:02.290153 env[1475]: time="2025-09-13T01:38:02.290121939Z" level=info msg="StopContainer for \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\" returns successfully" Sep 13 01:38:02.291932 env[1475]: time="2025-09-13T01:38:02.291897617Z" level=info msg="StopContainer for \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\" returns successfully" Sep 13 01:38:02.292474 env[1475]: time="2025-09-13T01:38:02.292448056Z" level=info msg="StopPodSandbox for \"4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334\"" Sep 13 01:38:02.292631 env[1475]: time="2025-09-13T01:38:02.292609376Z" level=info msg="Container to stop \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:02.292841 env[1475]: time="2025-09-13T01:38:02.292453456Z" level=info msg="StopPodSandbox for \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\"" Sep 13 01:38:02.292841 env[1475]: time="2025-09-13T01:38:02.292766936Z" level=info msg="Container to stop \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 
01:38:02.292841 env[1475]: time="2025-09-13T01:38:02.292783816Z" level=info msg="Container to stop \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:02.292841 env[1475]: time="2025-09-13T01:38:02.292795216Z" level=info msg="Container to stop \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:02.292841 env[1475]: time="2025-09-13T01:38:02.292806496Z" level=info msg="Container to stop \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:02.292841 env[1475]: time="2025-09-13T01:38:02.292816696Z" level=info msg="Container to stop \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:02.294835 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334-shm.mount: Deactivated successfully. Sep 13 01:38:02.294932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446-shm.mount: Deactivated successfully. Sep 13 01:38:02.308419 systemd[1]: cri-containerd-284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446.scope: Deactivated successfully. Sep 13 01:38:02.321708 systemd[1]: cri-containerd-4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334.scope: Deactivated successfully. Sep 13 01:38:02.348260 env[1475]: time="2025-09-13T01:38:02.347466760Z" level=info msg="shim disconnected" id=284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446 Sep 13 01:38:02.348260 env[1475]: time="2025-09-13T01:38:02.347523640Z" level=warning msg="cleaning up after shim disconnected" id=284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446 namespace=k8s.io Sep 13 01:38:02.348260 env[1475]: time="2025-09-13T01:38:02.347533920Z" level=info msg="cleaning up dead shim" Sep 13 01:38:02.348492 env[1475]: time="2025-09-13T01:38:02.348292199Z" level=info msg="shim disconnected" id=4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334 Sep 13 01:38:02.348492 env[1475]: time="2025-09-13T01:38:02.348325919Z" level=warning msg="cleaning up after shim disconnected" id=4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334 namespace=k8s.io Sep 13 01:38:02.348492 env[1475]: time="2025-09-13T01:38:02.348334239Z" level=info msg="cleaning up dead shim" Sep 13 01:38:02.358318 env[1475]: time="2025-09-13T01:38:02.358237788Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4092 runtime=io.containerd.runc.v2\n" Sep 13 01:38:02.358617 env[1475]: time="2025-09-13T01:38:02.358586628Z" level=info msg="TearDown network for sandbox \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" successfully" Sep 13 01:38:02.358668 env[1475]: time="2025-09-13T01:38:02.358613948Z" level=info msg="StopPodSandbox for \"284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446\" returns successfully" Sep 13 01:38:02.362594 env[1475]: time="2025-09-13T01:38:02.362514664Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4093 runtime=io.containerd.runc.v2\n" Sep 13 01:38:02.362817 
env[1475]: time="2025-09-13T01:38:02.362784304Z" level=info msg="TearDown network for sandbox \"4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334\" successfully" Sep 13 01:38:02.362817 env[1475]: time="2025-09-13T01:38:02.362812824Z" level=info msg="StopPodSandbox for \"4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334\" returns successfully" Sep 13 01:38:02.402431 kubelet[2420]: E0913 01:38:02.402386 2420 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:38:02.403987 kubelet[2420]: I0913 01:38:02.403968 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7dc7d788-7440-48cf-94c8-a67c51714ad2-cilium-config-path\") pod \"7dc7d788-7440-48cf-94c8-a67c51714ad2\" (UID: \"7dc7d788-7440-48cf-94c8-a67c51714ad2\") " Sep 13 01:38:02.404970 kubelet[2420]: I0913 01:38:02.404953 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cni-path\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.405181 kubelet[2420]: I0913 01:38:02.405167 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-xtables-lock\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.405388 kubelet[2420]: I0913 01:38:02.405374 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-host-proc-sys-net\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.405602 kubelet[2420]: I0913 01:38:02.405591 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h22sg\" (UniqueName: \"kubernetes.io/projected/44e9af50-5481-40e7-b2ea-2d436e49614b-kube-api-access-h22sg\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.405978 kubelet[2420]: I0913 01:38:02.405949 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-cgroup\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406050 kubelet[2420]: I0913 01:38:02.405982 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwxkh\" (UniqueName: \"kubernetes.io/projected/7dc7d788-7440-48cf-94c8-a67c51714ad2-kube-api-access-rwxkh\") pod \"7dc7d788-7440-48cf-94c8-a67c51714ad2\" (UID: \"7dc7d788-7440-48cf-94c8-a67c51714ad2\") " Sep 13 01:38:02.406050 kubelet[2420]: I0913 01:38:02.406001 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-bpf-maps\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406050 kubelet[2420]: I0913 01:38:02.406020 2420 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-lib-modules\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406050 kubelet[2420]: I0913 01:38:02.406038 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-config-path\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406146 kubelet[2420]: I0913 01:38:02.406074 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-hostproc\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406146 kubelet[2420]: I0913 01:38:02.406109 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44e9af50-5481-40e7-b2ea-2d436e49614b-hubble-tls\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406146 kubelet[2420]: I0913 01:38:02.406129 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-host-proc-sys-kernel\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406146 kubelet[2420]: I0913 01:38:02.406143 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-run\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406239 kubelet[2420]: I0913 01:38:02.406158 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-etc-cni-netd\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.406239 kubelet[2420]: I0913 01:38:02.406176 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44e9af50-5481-40e7-b2ea-2d436e49614b-clustermesh-secrets\") pod \"44e9af50-5481-40e7-b2ea-2d436e49614b\" (UID: \"44e9af50-5481-40e7-b2ea-2d436e49614b\") " Sep 13 01:38:02.407464 kubelet[2420]: I0913 01:38:02.405089 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cni-path" (OuterVolumeSpecName: "cni-path") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.409875 kubelet[2420]: I0913 01:38:02.405286 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.410903 kubelet[2420]: I0913 01:38:02.405557 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.410998 kubelet[2420]: I0913 01:38:02.408384 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-hostproc" (OuterVolumeSpecName: "hostproc") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.411052 kubelet[2420]: I0913 01:38:02.409522 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 01:38:02.411118 kubelet[2420]: I0913 01:38:02.409547 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.411171 kubelet[2420]: I0913 01:38:02.409803 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.411225 kubelet[2420]: I0913 01:38:02.409821 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.411454 kubelet[2420]: I0913 01:38:02.409834 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.411538 kubelet[2420]: I0913 01:38:02.410110 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.411597 kubelet[2420]: I0913 01:38:02.410124 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:02.411651 kubelet[2420]: I0913 01:38:02.410195 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e9af50-5481-40e7-b2ea-2d436e49614b-kube-api-access-h22sg" (OuterVolumeSpecName: "kube-api-access-h22sg") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "kube-api-access-h22sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:38:02.411712 kubelet[2420]: I0913 01:38:02.410781 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc7d788-7440-48cf-94c8-a67c51714ad2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7dc7d788-7440-48cf-94c8-a67c51714ad2" (UID: "7dc7d788-7440-48cf-94c8-a67c51714ad2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 01:38:02.413606 kubelet[2420]: I0913 01:38:02.413570 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e9af50-5481-40e7-b2ea-2d436e49614b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:38:02.413684 kubelet[2420]: I0913 01:38:02.413666 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44e9af50-5481-40e7-b2ea-2d436e49614b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "44e9af50-5481-40e7-b2ea-2d436e49614b" (UID: "44e9af50-5481-40e7-b2ea-2d436e49614b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 01:38:02.414241 kubelet[2420]: I0913 01:38:02.414220 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc7d788-7440-48cf-94c8-a67c51714ad2-kube-api-access-rwxkh" (OuterVolumeSpecName: "kube-api-access-rwxkh") pod "7dc7d788-7440-48cf-94c8-a67c51714ad2" (UID: "7dc7d788-7440-48cf-94c8-a67c51714ad2"). InnerVolumeSpecName "kube-api-access-rwxkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:38:02.507282 kubelet[2420]: I0913 01:38:02.507237 2420 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-bpf-maps\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.507474 kubelet[2420]: I0913 01:38:02.507458 2420 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-lib-modules\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.507541 kubelet[2420]: I0913 01:38:02.507530 2420 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-config-path\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.507608 kubelet[2420]: I0913 01:38:02.507598 2420 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-hostproc\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.507722 kubelet[2420]: I0913 01:38:02.507710 2420 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44e9af50-5481-40e7-b2ea-2d436e49614b-hubble-tls\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.507790 kubelet[2420]: I0913 01:38:02.507780 2420 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.507849 kubelet[2420]: I0913 01:38:02.507838 2420 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-run\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.507908 kubelet[2420]: I0913 01:38:02.507898 2420 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-etc-cni-netd\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.507966 kubelet[2420]: I0913 01:38:02.507956 2420 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44e9af50-5481-40e7-b2ea-2d436e49614b-clustermesh-secrets\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.508029 kubelet[2420]: I0913 01:38:02.508019 2420 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cni-path\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.508090 kubelet[2420]: I0913 01:38:02.508080 2420 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-xtables-lock\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.508149 kubelet[2420]: I0913 01:38:02.508139 2420 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-host-proc-sys-net\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.508209 kubelet[2420]: I0913 01:38:02.508198 2420 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-h22sg\" (UniqueName: \"kubernetes.io/projected/44e9af50-5481-40e7-b2ea-2d436e49614b-kube-api-access-h22sg\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.508290 kubelet[2420]: I0913 01:38:02.508280 2420 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7dc7d788-7440-48cf-94c8-a67c51714ad2-cilium-config-path\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.508360 kubelet[2420]: I0913 01:38:02.508350 2420 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44e9af50-5481-40e7-b2ea-2d436e49614b-cilium-cgroup\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.508418 kubelet[2420]: I0913 01:38:02.508406 2420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwxkh\" (UniqueName: \"kubernetes.io/projected/7dc7d788-7440-48cf-94c8-a67c51714ad2-kube-api-access-rwxkh\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:02.658188 kubelet[2420]: I0913 01:38:02.658160 2420 scope.go:117] "RemoveContainer" containerID="265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde" Sep 13 01:38:02.660436 env[1475]: time="2025-09-13T01:38:02.660066958Z" level=info msg="RemoveContainer for \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\"" Sep 13 01:38:02.665682 systemd[1]: Removed slice kubepods-besteffort-pod7dc7d788_7440_48cf_94c8_a67c51714ad2.slice. Sep 13 01:38:02.671492 systemd[1]: Removed slice kubepods-burstable-pod44e9af50_5481_40e7_b2ea_2d436e49614b.slice. Sep 13 01:38:02.671579 systemd[1]: kubepods-burstable-pod44e9af50_5481_40e7_b2ea_2d436e49614b.slice: Consumed 6.253s CPU time. 
Sep 13 01:38:02.690070 env[1475]: time="2025-09-13T01:38:02.690034127Z" level=info msg="RemoveContainer for \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\" returns successfully" Sep 13 01:38:02.690497 kubelet[2420]: I0913 01:38:02.690480 2420 scope.go:117] "RemoveContainer" containerID="265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde" Sep 13 01:38:02.690891 env[1475]: time="2025-09-13T01:38:02.690832646Z" level=error msg="ContainerStatus for \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\": not found" Sep 13 01:38:02.691101 kubelet[2420]: E0913 01:38:02.691081 2420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\": not found" containerID="265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde" Sep 13 01:38:02.691283 kubelet[2420]: I0913 01:38:02.691190 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde"} err="failed to get container status \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\": rpc error: code = NotFound desc = an error occurred when try to find container \"265a169b9b445d25c4467bef13865e4e60b0cf97601806cf2ecc24bb86920dde\": not found" Sep 13 01:38:02.691350 kubelet[2420]: I0913 01:38:02.691339 2420 scope.go:117] "RemoveContainer" containerID="b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87" Sep 13 01:38:02.693595 env[1475]: time="2025-09-13T01:38:02.693558523Z" level=info msg="RemoveContainer for \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\"" Sep 13 01:38:02.706998 env[1475]: time="2025-09-13T01:38:02.706960510Z" level=info msg="RemoveContainer for \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\" returns successfully" Sep 13 01:38:02.707358 kubelet[2420]: I0913 01:38:02.707339 2420 scope.go:117] "RemoveContainer" containerID="aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128" Sep 13 01:38:02.708531 env[1475]: time="2025-09-13T01:38:02.708506868Z" level=info msg="RemoveContainer for \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\"" Sep 13 01:38:02.720064 env[1475]: time="2025-09-13T01:38:02.720030096Z" level=info msg="RemoveContainer for \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\" returns successfully" Sep 13 01:38:02.720385 kubelet[2420]: I0913 01:38:02.720351 2420 scope.go:117] "RemoveContainer" containerID="a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5" Sep 13 01:38:02.721436 env[1475]: time="2025-09-13T01:38:02.721412015Z" level=info msg="RemoveContainer for \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\"" Sep 13 01:38:02.730847 env[1475]: time="2025-09-13T01:38:02.730819245Z" level=info msg="RemoveContainer for \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\" returns successfully" Sep 13 01:38:02.731164 kubelet[2420]: I0913 01:38:02.731139 2420 scope.go:117] "RemoveContainer" containerID="fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359" Sep 13 01:38:02.732112 env[1475]: time="2025-09-13T01:38:02.732089804Z" level=info msg="RemoveContainer for 
\"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\"" Sep 13 01:38:02.741839 env[1475]: time="2025-09-13T01:38:02.741811354Z" level=info msg="RemoveContainer for \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\" returns successfully" Sep 13 01:38:02.742100 kubelet[2420]: I0913 01:38:02.742080 2420 scope.go:117] "RemoveContainer" containerID="a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d" Sep 13 01:38:02.743075 env[1475]: time="2025-09-13T01:38:02.743048832Z" level=info msg="RemoveContainer for \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\"" Sep 13 01:38:02.755515 env[1475]: time="2025-09-13T01:38:02.755478380Z" level=info msg="RemoveContainer for \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\" returns successfully" Sep 13 01:38:02.755692 kubelet[2420]: I0913 01:38:02.755667 2420 scope.go:117] "RemoveContainer" containerID="b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87" Sep 13 01:38:02.755942 env[1475]: time="2025-09-13T01:38:02.755889619Z" level=error msg="ContainerStatus for \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\": not found" Sep 13 01:38:02.756169 kubelet[2420]: E0913 01:38:02.756136 2420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\": not found" containerID="b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87" Sep 13 01:38:02.756300 kubelet[2420]: I0913 01:38:02.756277 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87"} err="failed to get container status \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5a44e2087676f3a9d13e953b27c25552419b2a97bf197d01b768cd549352e87\": not found" Sep 13 01:38:02.756388 kubelet[2420]: I0913 01:38:02.756375 2420 scope.go:117] "RemoveContainer" containerID="aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128" Sep 13 01:38:02.756656 env[1475]: time="2025-09-13T01:38:02.756612938Z" level=error msg="ContainerStatus for \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\": not found" Sep 13 01:38:02.756853 kubelet[2420]: E0913 01:38:02.756829 2420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\": not found" containerID="aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128" Sep 13 01:38:02.756925 kubelet[2420]: I0913 01:38:02.756879 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128"} err="failed to get container status \"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"aa8d65e58cc5215e5d999e09eb757f4d55a722fe26ca6d647c9c0742bb682128\": not found" Sep 13 01:38:02.756925 kubelet[2420]: I0913 01:38:02.756899 2420 scope.go:117] "RemoveContainer" containerID="a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5" Sep 13 01:38:02.757144 env[1475]: time="2025-09-13T01:38:02.757102058Z" level=error msg="ContainerStatus for \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\": not found" Sep 13 01:38:02.757371 kubelet[2420]: E0913 01:38:02.757300 2420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\": not found" containerID="a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5" Sep 13 01:38:02.757371 kubelet[2420]: I0913 01:38:02.757321 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5"} err="failed to get container status \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3a56dedb4408ed1fa3a6a57108f716fb8cc89728d96ddb51ce41ef1337e92b5\": not found" Sep 13 01:38:02.757371 kubelet[2420]: I0913 01:38:02.757335 2420 scope.go:117] "RemoveContainer" containerID="fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359" Sep 13 01:38:02.759494 env[1475]: time="2025-09-13T01:38:02.759448576Z" level=error msg="ContainerStatus for \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\": not found" Sep 13 01:38:02.759721 kubelet[2420]: E0913 01:38:02.759696 2420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\": not found" containerID="fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359" Sep 13 01:38:02.759791 kubelet[2420]: I0913 01:38:02.759722 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359"} err="failed to get container status \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd860656fc3f14d11819e9e946ad52189f3a76e4e4a52132e3a907b1e897b359\": not found" Sep 13 01:38:02.759791 kubelet[2420]: I0913 01:38:02.759740 2420 scope.go:117] "RemoveContainer" containerID="a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d" Sep 13 01:38:02.759989 env[1475]: time="2025-09-13T01:38:02.759949655Z" level=error msg="ContainerStatus for \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\": not found" Sep 13 01:38:02.760196 kubelet[2420]: E0913 01:38:02.760169 2420 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\": not found" containerID="a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d" Sep 13 01:38:02.760315 kubelet[2420]: I0913 01:38:02.760297 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d"} err="failed to get container status \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a27ea97712e7bf3e9c38921a8c72fead643d7afe72455feeeaa7d5a1726eca7d\": not found" Sep 13 01:38:03.153372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4843febb32d6cef0c25594dd249b636d06477a3cff9850e46b93d2c643370334-rootfs.mount: Deactivated successfully. Sep 13 01:38:03.153469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-284acc90287bf4833f926d02bb119cc633b3aef796db547531b142b1784b0446-rootfs.mount: Deactivated successfully. Sep 13 01:38:03.153529 systemd[1]: var-lib-kubelet-pods-7dc7d788\x2d7440\x2d48cf\x2d94c8\x2da67c51714ad2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drwxkh.mount: Deactivated successfully. Sep 13 01:38:03.153584 systemd[1]: var-lib-kubelet-pods-44e9af50\x2d5481\x2d40e7\x2db2ea\x2d2d436e49614b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh22sg.mount: Deactivated successfully. Sep 13 01:38:03.153637 systemd[1]: var-lib-kubelet-pods-44e9af50\x2d5481\x2d40e7\x2db2ea\x2d2d436e49614b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:38:03.153693 systemd[1]: var-lib-kubelet-pods-44e9af50\x2d5481\x2d40e7\x2db2ea\x2d2d436e49614b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 01:38:04.184694 sshd[3961]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:04.187661 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Sep 13 01:38:04.187824 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 01:38:04.188001 systemd[1]: session-23.scope: Consumed 1.042s CPU time. Sep 13 01:38:04.188680 systemd[1]: sshd@20-10.200.20.15:22-10.200.16.10:33062.service: Deactivated successfully. Sep 13 01:38:04.189515 systemd-logind[1463]: Removed session 23. Sep 13 01:38:04.255492 systemd[1]: Started sshd@21-10.200.20.15:22-10.200.16.10:58082.service. Sep 13 01:38:04.292689 kubelet[2420]: I0913 01:38:04.292641 2420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e9af50-5481-40e7-b2ea-2d436e49614b" path="/var/lib/kubelet/pods/44e9af50-5481-40e7-b2ea-2d436e49614b/volumes" Sep 13 01:38:04.293208 kubelet[2420]: I0913 01:38:04.293186 2420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc7d788-7440-48cf-94c8-a67c51714ad2" path="/var/lib/kubelet/pods/7dc7d788-7440-48cf-94c8-a67c51714ad2/volumes" Sep 13 01:38:04.678365 sshd[4126]: Accepted publickey for core from 10.200.16.10 port 58082 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:38:04.679950 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:38:04.683933 systemd-logind[1463]: New session 24 of user core. Sep 13 01:38:04.684437 systemd[1]: Started session-24.scope. 
Sep 13 01:38:05.851351 kubelet[2420]: I0913 01:38:05.851294 2420 setters.go:600] "Node became not ready" node="ci-3510.3.8-n-49eff79a60" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T01:38:05Z","lastTransitionTime":"2025-09-13T01:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 01:38:05.959620 kubelet[2420]: E0913 01:38:05.959574 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44e9af50-5481-40e7-b2ea-2d436e49614b" containerName="apply-sysctl-overwrites" Sep 13 01:38:05.959620 kubelet[2420]: E0913 01:38:05.959655 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7dc7d788-7440-48cf-94c8-a67c51714ad2" containerName="cilium-operator" Sep 13 01:38:05.959620 kubelet[2420]: E0913 01:38:05.959664 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44e9af50-5481-40e7-b2ea-2d436e49614b" containerName="cilium-agent" Sep 13 01:38:05.959858 kubelet[2420]: E0913 01:38:05.959670 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44e9af50-5481-40e7-b2ea-2d436e49614b" containerName="mount-cgroup" Sep 13 01:38:05.959858 kubelet[2420]: E0913 01:38:05.959676 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44e9af50-5481-40e7-b2ea-2d436e49614b" containerName="clean-cilium-state" Sep 13 01:38:05.959858 kubelet[2420]: E0913 01:38:05.959682 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44e9af50-5481-40e7-b2ea-2d436e49614b" containerName="mount-bpf-fs" Sep 13 01:38:05.959858 kubelet[2420]: I0913 01:38:05.959704 2420 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e9af50-5481-40e7-b2ea-2d436e49614b" containerName="cilium-agent" Sep 13 01:38:05.959858 kubelet[2420]: I0913 01:38:05.959710 2420 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc7d788-7440-48cf-94c8-a67c51714ad2" containerName="cilium-operator" Sep 13 01:38:05.964877 systemd[1]: Created slice kubepods-burstable-podc00050e0_dc30_4233_acff_90c9d7f2c9d8.slice. Sep 13 01:38:06.013461 sshd[4126]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:06.016304 systemd[1]: sshd@21-10.200.20.15:22-10.200.16.10:58082.service: Deactivated successfully. Sep 13 01:38:06.017045 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 01:38:06.018008 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Sep 13 01:38:06.018818 systemd-logind[1463]: Removed session 24. 
Sep 13 01:38:06.025804 kubelet[2420]: I0913 01:38:06.025604 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-hostproc\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.025804 kubelet[2420]: I0913 01:38:06.025659 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-xtables-lock\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.025804 kubelet[2420]: I0913 01:38:06.025686 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-lib-modules\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.025804 kubelet[2420]: I0913 01:38:06.025705 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-run\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.025804 kubelet[2420]: I0913 01:38:06.025726 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-host-proc-sys-net\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.025804 kubelet[2420]: I0913 01:38:06.025752 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-host-proc-sys-kernel\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026046 kubelet[2420]: I0913 01:38:06.025772 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c00050e0-dc30-4233-acff-90c9d7f2c9d8-clustermesh-secrets\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026046 kubelet[2420]: I0913 01:38:06.025791 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-ipsec-secrets\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026046 kubelet[2420]: I0913 01:38:06.025809 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c00050e0-dc30-4233-acff-90c9d7f2c9d8-hubble-tls\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026046 kubelet[2420]: I0913 01:38:06.025825 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-bpf-maps\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026046 kubelet[2420]: I0913 01:38:06.025843 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-etc-cni-netd\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026046 kubelet[2420]: I0913 01:38:06.025861 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-config-path\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026209 kubelet[2420]: I0913 01:38:06.025880 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4blb\" (UniqueName: \"kubernetes.io/projected/c00050e0-dc30-4233-acff-90c9d7f2c9d8-kube-api-access-t4blb\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026209 kubelet[2420]: I0913 01:38:06.025898 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-cgroup\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.026209 kubelet[2420]: I0913 01:38:06.025915 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cni-path\") pod \"cilium-v2j69\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " pod="kube-system/cilium-v2j69" Sep 13 01:38:06.084066 systemd[1]: Started sshd@22-10.200.20.15:22-10.200.16.10:58086.service. Sep 13 01:38:06.269832 env[1475]: time="2025-09-13T01:38:06.269396018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2j69,Uid:c00050e0-dc30-4233-acff-90c9d7f2c9d8,Namespace:kube-system,Attempt:0,}" Sep 13 01:38:06.310496 env[1475]: time="2025-09-13T01:38:06.310382818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:38:06.310835 env[1475]: time="2025-09-13T01:38:06.310688778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:38:06.310835 env[1475]: time="2025-09-13T01:38:06.310709938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:38:06.311051 env[1475]: time="2025-09-13T01:38:06.310978417Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04 pid=4151 runtime=io.containerd.runc.v2 Sep 13 01:38:06.321169 systemd[1]: Started cri-containerd-bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04.scope. 
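
Each RunPodSandbox call like the one above ends with containerd launching a runc.v2 shim ("starting signal loop"), whose task path and pid are in the log entry. To cross-check the sandbox from the node, crictl can be pointed at containerd's CRI socket; a sketch, where the socket path is the conventional default and the id prefix is copied from the log line:

    # A sketch: inspect the cilium-v2j69 sandbox created above.
    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    crictl pods --name cilium-v2j69           # list the pod sandbox
    crictl inspectp bc839625ead7 | head -20   # prefixes of the full id work
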
Sep 13 01:38:06.344421 env[1475]: time="2025-09-13T01:38:06.344384625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2j69,Uid:c00050e0-dc30-4233-acff-90c9d7f2c9d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04\"" Sep 13 01:38:06.350210 env[1475]: time="2025-09-13T01:38:06.350171819Z" level=info msg="CreateContainer within sandbox \"bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:38:06.389234 env[1475]: time="2025-09-13T01:38:06.389176501Z" level=info msg="CreateContainer within sandbox \"bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb\"" Sep 13 01:38:06.389816 env[1475]: time="2025-09-13T01:38:06.389749221Z" level=info msg="StartContainer for \"ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb\"" Sep 13 01:38:06.405145 systemd[1]: Started cri-containerd-ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb.scope. Sep 13 01:38:06.413666 systemd[1]: cri-containerd-ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb.scope: Deactivated successfully. Sep 13 01:38:06.413913 systemd[1]: Stopped cri-containerd-ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb.scope. Sep 13 01:38:06.477437 env[1475]: time="2025-09-13T01:38:06.477389215Z" level=info msg="shim disconnected" id=ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb Sep 13 01:38:06.477674 env[1475]: time="2025-09-13T01:38:06.477657135Z" level=warning msg="cleaning up after shim disconnected" id=ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb namespace=k8s.io Sep 13 01:38:06.477751 env[1475]: time="2025-09-13T01:38:06.477738615Z" level=info msg="cleaning up dead shim" Sep 13 01:38:06.484295 env[1475]: time="2025-09-13T01:38:06.484227648Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4207 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T01:38:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 13 01:38:06.484777 env[1475]: time="2025-09-13T01:38:06.484683728Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Sep 13 01:38:06.485184 env[1475]: time="2025-09-13T01:38:06.484908448Z" level=error msg="Failed to pipe stdout of container \"ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb\"" error="reading from a closed fifo" Sep 13 01:38:06.485319 env[1475]: time="2025-09-13T01:38:06.484980128Z" level=error msg="Failed to pipe stderr of container \"ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb\"" error="reading from a closed fifo" Sep 13 01:38:06.489902 env[1475]: time="2025-09-13T01:38:06.489852323Z" level=error msg="StartContainer for \"ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 13 01:38:06.490281 
kubelet[2420]: E0913 01:38:06.490164 2420 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb" Sep 13 01:38:06.490720 kubelet[2420]: E0913 01:38:06.490471 2420 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 13 01:38:06.490720 kubelet[2420]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 13 01:38:06.490720 kubelet[2420]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 13 01:38:06.490720 kubelet[2420]: rm /hostbin/cilium-mount Sep 13 01:38:06.490886 kubelet[2420]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4blb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-v2j69_kube-system(c00050e0-dc30-4233-acff-90c9d7f2c9d8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 13 01:38:06.490886 kubelet[2420]: > logger="UnhandledError" Sep 13 01:38:06.491818 kubelet[2420]: E0913 01:38:06.491751 2420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-v2j69" podUID="c00050e0-dc30-4233-acff-90c9d7f2c9d8" Sep 13 01:38:06.507997 sshd[4137]: Accepted publickey for core from 
10.200.16.10 port 58086 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:38:06.509339 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:38:06.513684 systemd[1]: Started session-25.scope. Sep 13 01:38:06.514309 systemd-logind[1463]: New session 25 of user core. Sep 13 01:38:06.677001 env[1475]: time="2025-09-13T01:38:06.676568541Z" level=info msg="CreateContainer within sandbox \"bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Sep 13 01:38:06.713464 env[1475]: time="2025-09-13T01:38:06.713411505Z" level=info msg="CreateContainer within sandbox \"bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d\"" Sep 13 01:38:06.714520 env[1475]: time="2025-09-13T01:38:06.714490424Z" level=info msg="StartContainer for \"d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d\"" Sep 13 01:38:06.728766 systemd[1]: Started cri-containerd-d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d.scope. Sep 13 01:38:06.740864 systemd[1]: cri-containerd-d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d.scope: Deactivated successfully. Sep 13 01:38:06.769596 env[1475]: time="2025-09-13T01:38:06.769545170Z" level=info msg="shim disconnected" id=d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d Sep 13 01:38:06.769862 env[1475]: time="2025-09-13T01:38:06.769843330Z" level=warning msg="cleaning up after shim disconnected" id=d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d namespace=k8s.io Sep 13 01:38:06.769941 env[1475]: time="2025-09-13T01:38:06.769928730Z" level=info msg="cleaning up dead shim" Sep 13 01:38:06.776671 env[1475]: time="2025-09-13T01:38:06.776629123Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4252 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T01:38:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 13 01:38:06.777086 env[1475]: time="2025-09-13T01:38:06.777033363Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Sep 13 01:38:06.777433 env[1475]: time="2025-09-13T01:38:06.777330122Z" level=error msg="Failed to pipe stdout of container \"d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d\"" error="reading from a closed fifo" Sep 13 01:38:06.777703 env[1475]: time="2025-09-13T01:38:06.777379122Z" level=error msg="Failed to pipe stderr of container \"d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d\"" error="reading from a closed fifo" Sep 13 01:38:06.783259 env[1475]: time="2025-09-13T01:38:06.783205997Z" level=error msg="StartContainer for \"d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 13 01:38:06.783551 kubelet[2420]: E0913 01:38:06.783497 2420 log.go:32] "StartContainer from runtime service failed" err="rpc 
error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d" Sep 13 01:38:06.783698 kubelet[2420]: E0913 01:38:06.783659 2420 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 13 01:38:06.783698 kubelet[2420]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 13 01:38:06.783698 kubelet[2420]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 13 01:38:06.783698 kubelet[2420]: rm /hostbin/cilium-mount Sep 13 01:38:06.783698 kubelet[2420]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4blb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-v2j69_kube-system(c00050e0-dc30-4233-acff-90c9d7f2c9d8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 13 01:38:06.783698 kubelet[2420]: > logger="UnhandledError" Sep 13 01:38:06.785031 kubelet[2420]: E0913 01:38:06.784976 2420 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-v2j69" podUID="c00050e0-dc30-4233-acff-90c9d7f2c9d8" Sep 13 01:38:06.909189 sshd[4137]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:06.911940 systemd[1]: sshd@22-10.200.20.15:22-10.200.16.10:58086.service: 
Deactivated successfully. Sep 13 01:38:06.912693 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 01:38:06.913270 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Sep 13 01:38:06.914043 systemd-logind[1463]: Removed session 25. Sep 13 01:38:06.980956 systemd[1]: Started sshd@23-10.200.20.15:22-10.200.16.10:58098.service. Sep 13 01:38:07.291040 kubelet[2420]: E0913 01:38:07.290746 2420 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-m7p6w" podUID="6504e50c-1684-4c0c-a591-5ed9b4788fcb" Sep 13 01:38:07.403216 kubelet[2420]: E0913 01:38:07.403181 2420 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:38:07.404361 sshd[4266]: Accepted publickey for core from 10.200.16.10 port 58098 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:38:07.405121 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:38:07.410307 systemd-logind[1463]: New session 26 of user core. Sep 13 01:38:07.410357 systemd[1]: Started session-26.scope. Sep 13 01:38:07.677174 kubelet[2420]: I0913 01:38:07.677082 2420 scope.go:117] "RemoveContainer" containerID="ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb" Sep 13 01:38:07.678503 env[1475]: time="2025-09-13T01:38:07.678155452Z" level=info msg="StopPodSandbox for \"bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04\"" Sep 13 01:38:07.678920 env[1475]: time="2025-09-13T01:38:07.678512611Z" level=info msg="Container to stop \"ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:07.678920 env[1475]: time="2025-09-13T01:38:07.678537171Z" level=info msg="Container to stop \"d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:07.678920 env[1475]: time="2025-09-13T01:38:07.678674731Z" level=info msg="RemoveContainer for \"ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb\"" Sep 13 01:38:07.685115 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04-shm.mount: Deactivated successfully. Sep 13 01:38:07.693226 systemd[1]: cri-containerd-bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04.scope: Deactivated successfully. Sep 13 01:38:07.699758 env[1475]: time="2025-09-13T01:38:07.699715871Z" level=info msg="RemoveContainer for \"ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb\" returns successfully" Sep 13 01:38:07.730310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04-rootfs.mount: Deactivated successfully. 
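
The repeated failure mode above is runc aborting at "write /proc/self/attr/keycreate: invalid argument". One plausible reading: the container spec carries SELinuxOptions (Type:spc_t), so runc tries to label the session keyring via /proc/self/attr/keycreate before exec'ing, and on a host whose kernel does not actually enforce SELinux that write fails with EINVAL, so the task never starts. A hedged way to confirm the mismatch, assuming a root shell on the node:

    # A sketch: check whether this kernel exposes SELinux at all.
    grep -w selinuxfs /proc/filesystems || echo "no SELinux in this kernel"
    test -d /sys/fs/selinux && cat /sys/fs/selinux/enforce
    # runc writes the container's keyring label to this file:
    cat /proc/self/attr/keycreate 2>&1

If the host really lacks usable SELinux, the usual remedies are dropping the SELinux options from the workload or enabling SELinux on the host; which applies here is outside what this log shows.
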
Sep 13 01:38:07.749487 env[1475]: time="2025-09-13T01:38:07.749433223Z" level=info msg="shim disconnected" id=bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04 Sep 13 01:38:07.749487 env[1475]: time="2025-09-13T01:38:07.749481743Z" level=warning msg="cleaning up after shim disconnected" id=bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04 namespace=k8s.io Sep 13 01:38:07.749487 env[1475]: time="2025-09-13T01:38:07.749492583Z" level=info msg="cleaning up dead shim" Sep 13 01:38:07.756977 env[1475]: time="2025-09-13T01:38:07.756925936Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4292 runtime=io.containerd.runc.v2\n" Sep 13 01:38:07.757320 env[1475]: time="2025-09-13T01:38:07.757289935Z" level=info msg="TearDown network for sandbox \"bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04\" successfully" Sep 13 01:38:07.757373 env[1475]: time="2025-09-13T01:38:07.757319775Z" level=info msg="StopPodSandbox for \"bc839625ead73aa11a1e528fdbd59949757515f105f4a786752313076a72dd04\" returns successfully" Sep 13 01:38:07.837581 kubelet[2420]: I0913 01:38:07.837552 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-ipsec-secrets\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.837786 kubelet[2420]: I0913 01:38:07.837771 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4blb\" (UniqueName: \"kubernetes.io/projected/c00050e0-dc30-4233-acff-90c9d7f2c9d8-kube-api-access-t4blb\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.837858 kubelet[2420]: I0913 01:38:07.837846 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-hostproc\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.837931 kubelet[2420]: I0913 01:38:07.837920 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-etc-cni-netd\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838004 kubelet[2420]: I0913 01:38:07.837992 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-host-proc-sys-net\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838073 kubelet[2420]: I0913 01:38:07.838062 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-cgroup\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838149 kubelet[2420]: I0913 01:38:07.838138 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c00050e0-dc30-4233-acff-90c9d7f2c9d8-clustermesh-secrets\") pod 
\"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838215 kubelet[2420]: I0913 01:38:07.838204 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cni-path\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838323 kubelet[2420]: I0913 01:38:07.838290 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.838382 kubelet[2420]: I0913 01:38:07.838299 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-host-proc-sys-kernel\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838382 kubelet[2420]: I0913 01:38:07.838347 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-run\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838382 kubelet[2420]: I0913 01:38:07.838365 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-bpf-maps\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838458 kubelet[2420]: I0913 01:38:07.838384 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-lib-modules\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838458 kubelet[2420]: I0913 01:38:07.838399 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-xtables-lock\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838458 kubelet[2420]: I0913 01:38:07.838416 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c00050e0-dc30-4233-acff-90c9d7f2c9d8-hubble-tls\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838458 kubelet[2420]: I0913 01:38:07.838434 2420 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-config-path\") pod \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\" (UID: \"c00050e0-dc30-4233-acff-90c9d7f2c9d8\") " Sep 13 01:38:07.838552 kubelet[2420]: I0913 01:38:07.838479 2420 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-etc-cni-netd\") on node 
\"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.838613 kubelet[2420]: I0913 01:38:07.838594 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.838683 kubelet[2420]: I0913 01:38:07.838672 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.838752 kubelet[2420]: I0913 01:38:07.838740 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.840441 kubelet[2420]: I0913 01:38:07.840401 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cni-path" (OuterVolumeSpecName: "cni-path") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.840529 kubelet[2420]: I0913 01:38:07.840458 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.840707 kubelet[2420]: I0913 01:38:07.840677 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 01:38:07.840752 kubelet[2420]: I0913 01:38:07.840715 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.840752 kubelet[2420]: I0913 01:38:07.840730 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.842770 systemd[1]: var-lib-kubelet-pods-c00050e0\x2ddc30\x2d4233\x2dacff\x2d90c9d7f2c9d8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 01:38:07.843606 kubelet[2420]: I0913 01:38:07.843585 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-hostproc" (OuterVolumeSpecName: "hostproc") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.843695 kubelet[2420]: I0913 01:38:07.843682 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 01:38:07.847111 systemd[1]: var-lib-kubelet-pods-c00050e0\x2ddc30\x2d4233\x2dacff\x2d90c9d7f2c9d8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 01:38:07.848142 kubelet[2420]: I0913 01:38:07.848100 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 01:38:07.848327 kubelet[2420]: I0913 01:38:07.848306 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00050e0-dc30-4233-acff-90c9d7f2c9d8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 01:38:07.850228 kubelet[2420]: I0913 01:38:07.850186 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00050e0-dc30-4233-acff-90c9d7f2c9d8-kube-api-access-t4blb" (OuterVolumeSpecName: "kube-api-access-t4blb") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "kube-api-access-t4blb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:38:07.850383 kubelet[2420]: I0913 01:38:07.850364 2420 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00050e0-dc30-4233-acff-90c9d7f2c9d8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c00050e0-dc30-4233-acff-90c9d7f2c9d8" (UID: "c00050e0-dc30-4233-acff-90c9d7f2c9d8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.939552 2420 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940316 2420 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4blb\" (UniqueName: \"kubernetes.io/projected/c00050e0-dc30-4233-acff-90c9d7f2c9d8-kube-api-access-t4blb\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940340 2420 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-hostproc\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940360 2420 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-host-proc-sys-net\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940384 2420 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-cgroup\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940394 2420 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cni-path\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940402 2420 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940413 2420 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c00050e0-dc30-4233-acff-90c9d7f2c9d8-clustermesh-secrets\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940423 2420 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-lib-modules\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940431 2420 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-run\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940440 2420 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-bpf-maps\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940448 2420 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c00050e0-dc30-4233-acff-90c9d7f2c9d8-xtables-lock\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940456 2420 
reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c00050e0-dc30-4233-acff-90c9d7f2c9d8-hubble-tls\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:07.940511 kubelet[2420]: I0913 01:38:07.940465 2420 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c00050e0-dc30-4233-acff-90c9d7f2c9d8-cilium-config-path\") on node \"ci-3510.3.8-n-49eff79a60\" DevicePath \"\"" Sep 13 01:38:08.131205 systemd[1]: var-lib-kubelet-pods-c00050e0\x2ddc30\x2d4233\x2dacff\x2d90c9d7f2c9d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt4blb.mount: Deactivated successfully. Sep 13 01:38:08.131323 systemd[1]: var-lib-kubelet-pods-c00050e0\x2ddc30\x2d4233\x2dacff\x2d90c9d7f2c9d8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:38:08.295888 systemd[1]: Removed slice kubepods-burstable-podc00050e0_dc30_4233_acff_90c9d7f2c9d8.slice. Sep 13 01:38:08.679473 kubelet[2420]: I0913 01:38:08.679378 2420 scope.go:117] "RemoveContainer" containerID="d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d" Sep 13 01:38:08.682418 env[1475]: time="2025-09-13T01:38:08.682131693Z" level=info msg="RemoveContainer for \"d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d\"" Sep 13 01:38:08.692551 env[1475]: time="2025-09-13T01:38:08.692515963Z" level=info msg="RemoveContainer for \"d3e561db158bed27b7907c91e02cf6d8df64724894f4105dbd70b2dd1170193d\" returns successfully" Sep 13 01:38:08.731436 kubelet[2420]: E0913 01:38:08.731403 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c00050e0-dc30-4233-acff-90c9d7f2c9d8" containerName="mount-cgroup" Sep 13 01:38:08.731588 kubelet[2420]: E0913 01:38:08.731577 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c00050e0-dc30-4233-acff-90c9d7f2c9d8" containerName="mount-cgroup" Sep 13 01:38:08.731685 kubelet[2420]: I0913 01:38:08.731674 2420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00050e0-dc30-4233-acff-90c9d7f2c9d8" containerName="mount-cgroup" Sep 13 01:38:08.731754 kubelet[2420]: I0913 01:38:08.731745 2420 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00050e0-dc30-4233-acff-90c9d7f2c9d8" containerName="mount-cgroup" Sep 13 01:38:08.737127 systemd[1]: Created slice kubepods-burstable-pod47001ffb_82a9_4801_93d1_a0f6abfcf80c.slice. 
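
The var-lib-kubelet-...mount unit names in the systemd entries above are escaped kubelet volume paths: "/" becomes "-", "-" becomes \x2d, and "~" becomes \x7e. systemd-escape translates between the two forms; a sketch using one of the paths from this pod:

    # A sketch: round-trip a kubelet volume path through systemd's
    # unit-name escaping, as seen in the mount units above.
    systemd-escape --path --suffix=mount \
      "/var/lib/kubelet/pods/c00050e0-dc30-4233-acff-90c9d7f2c9d8/volumes/kubernetes.io~projected/hubble-tls"
    # ...and decode a unit name back into a path:
    systemd-escape --unescape --path \
      "var-lib-kubelet-pods-c00050e0\x2ddc30\x2d4233\x2dacff\x2d90c9d7f2c9d8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls"
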
Sep 13 01:38:08.739446 kubelet[2420]: W0913 01:38:08.739400 2420 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.8-n-49eff79a60" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-49eff79a60' and this object Sep 13 01:38:08.739553 kubelet[2420]: E0913 01:38:08.739455 2420 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510.3.8-n-49eff79a60\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-49eff79a60' and this object" logger="UnhandledError" Sep 13 01:38:08.739553 kubelet[2420]: W0913 01:38:08.739400 2420 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.8-n-49eff79a60" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-49eff79a60' and this object Sep 13 01:38:08.739553 kubelet[2420]: E0913 01:38:08.739478 2420 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.8-n-49eff79a60\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-49eff79a60' and this object" logger="UnhandledError" Sep 13 01:38:08.739635 kubelet[2420]: W0913 01:38:08.739561 2420 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.8-n-49eff79a60" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-49eff79a60' and this object Sep 13 01:38:08.739635 kubelet[2420]: E0913 01:38:08.739588 2420 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510.3.8-n-49eff79a60\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-49eff79a60' and this object" logger="UnhandledError" Sep 13 01:38:08.739730 kubelet[2420]: W0913 01:38:08.739714 2420 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.8-n-49eff79a60" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-49eff79a60' and this object Sep 13 01:38:08.739828 kubelet[2420]: E0913 01:38:08.739807 2420 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.8-n-49eff79a60\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-49eff79a60' and this object" logger="UnhandledError" Sep 13 
01:38:08.744860 kubelet[2420]: I0913 01:38:08.744830 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-cilium-run\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745055 kubelet[2420]: I0913 01:38:08.745016 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-bpf-maps\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745108 kubelet[2420]: I0913 01:38:08.745075 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-etc-cni-netd\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745137 kubelet[2420]: I0913 01:38:08.745107 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-hostproc\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745137 kubelet[2420]: I0913 01:38:08.745125 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-cni-path\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745190 kubelet[2420]: I0913 01:38:08.745141 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-host-proc-sys-net\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745190 kubelet[2420]: I0913 01:38:08.745159 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-host-proc-sys-kernel\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745190 kubelet[2420]: I0913 01:38:08.745173 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47001ffb-82a9-4801-93d1-a0f6abfcf80c-hubble-tls\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745273 kubelet[2420]: I0913 01:38:08.745193 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/47001ffb-82a9-4801-93d1-a0f6abfcf80c-cilium-ipsec-secrets\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745273 kubelet[2420]: I0913 01:38:08.745209 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-lib-modules\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745273 kubelet[2420]: I0913 01:38:08.745223 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-xtables-lock\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745273 kubelet[2420]: I0913 01:38:08.745238 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47001ffb-82a9-4801-93d1-a0f6abfcf80c-cilium-config-path\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745273 kubelet[2420]: I0913 01:38:08.745272 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47001ffb-82a9-4801-93d1-a0f6abfcf80c-cilium-cgroup\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745392 kubelet[2420]: I0913 01:38:08.745290 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjw4l\" (UniqueName: \"kubernetes.io/projected/47001ffb-82a9-4801-93d1-a0f6abfcf80c-kube-api-access-gjw4l\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:08.745392 kubelet[2420]: I0913 01:38:08.745308 2420 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47001ffb-82a9-4801-93d1-a0f6abfcf80c-clustermesh-secrets\") pod \"cilium-m8qw5\" (UID: \"47001ffb-82a9-4801-93d1-a0f6abfcf80c\") " pod="kube-system/cilium-m8qw5" Sep 13 01:38:09.290531 kubelet[2420]: E0913 01:38:09.290480 2420 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-m7p6w" podUID="6504e50c-1684-4c0c-a591-5ed9b4788fcb" Sep 13 01:38:09.582161 kubelet[2420]: W0913 01:38:09.582058 2420 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc00050e0_dc30_4233_acff_90c9d7f2c9d8.slice/cri-containerd-ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb.scope WatchSource:0}: container "ff7a554c514c7b1b3b92423a145af6fe8fceeb887dd77165689ec681327128fb" in namespace "k8s.io": not found Sep 13 01:38:09.846934 kubelet[2420]: E0913 01:38:09.846822 2420 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Sep 13 01:38:09.846934 kubelet[2420]: E0913 01:38:09.846856 2420 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-m8qw5: failed to sync secret cache: timed out waiting for the condition Sep 13 01:38:09.846934 kubelet[2420]: E0913 01:38:09.846936 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/47001ffb-82a9-4801-93d1-a0f6abfcf80c-hubble-tls 
podName:47001ffb-82a9-4801-93d1-a0f6abfcf80c nodeName:}" failed. No retries permitted until 2025-09-13 01:38:10.346904157 +0000 UTC m=+188.181438709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/47001ffb-82a9-4801-93d1-a0f6abfcf80c-hubble-tls") pod "cilium-m8qw5" (UID: "47001ffb-82a9-4801-93d1-a0f6abfcf80c") : failed to sync secret cache: timed out waiting for the condition Sep 13 01:38:09.847349 kubelet[2420]: E0913 01:38:09.846955 2420 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Sep 13 01:38:09.847349 kubelet[2420]: E0913 01:38:09.846979 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47001ffb-82a9-4801-93d1-a0f6abfcf80c-cilium-ipsec-secrets podName:47001ffb-82a9-4801-93d1-a0f6abfcf80c nodeName:}" failed. No retries permitted until 2025-09-13 01:38:10.346972836 +0000 UTC m=+188.181507428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/47001ffb-82a9-4801-93d1-a0f6abfcf80c-cilium-ipsec-secrets") pod "cilium-m8qw5" (UID: "47001ffb-82a9-4801-93d1-a0f6abfcf80c") : failed to sync secret cache: timed out waiting for the condition Sep 13 01:38:09.847349 kubelet[2420]: E0913 01:38:09.847214 2420 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Sep 13 01:38:09.847349 kubelet[2420]: E0913 01:38:09.847273 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47001ffb-82a9-4801-93d1-a0f6abfcf80c-clustermesh-secrets podName:47001ffb-82a9-4801-93d1-a0f6abfcf80c nodeName:}" failed. No retries permitted until 2025-09-13 01:38:10.347236236 +0000 UTC m=+188.181770828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/47001ffb-82a9-4801-93d1-a0f6abfcf80c-clustermesh-secrets") pod "cilium-m8qw5" (UID: "47001ffb-82a9-4801-93d1-a0f6abfcf80c") : failed to sync secret cache: timed out waiting for the condition Sep 13 01:38:10.292677 kubelet[2420]: I0913 01:38:10.292638 2420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00050e0-dc30-4233-acff-90c9d7f2c9d8" path="/var/lib/kubelet/pods/c00050e0-dc30-4233-acff-90c9d7f2c9d8/volumes" Sep 13 01:38:10.540088 env[1475]: time="2025-09-13T01:38:10.540044273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8qw5,Uid:47001ffb-82a9-4801-93d1-a0f6abfcf80c,Namespace:kube-system,Attempt:0,}" Sep 13 01:38:10.579412 env[1475]: time="2025-09-13T01:38:10.579286556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:38:10.579579 env[1475]: time="2025-09-13T01:38:10.579554876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:38:10.579682 env[1475]: time="2025-09-13T01:38:10.579660876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:38:10.579958 env[1475]: time="2025-09-13T01:38:10.579926876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5 pid=4322 runtime=io.containerd.runc.v2 Sep 13 01:38:10.592634 systemd[1]: Started cri-containerd-20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5.scope. Sep 13 01:38:10.617887 env[1475]: time="2025-09-13T01:38:10.617834641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8qw5,Uid:47001ffb-82a9-4801-93d1-a0f6abfcf80c,Namespace:kube-system,Attempt:0,} returns sandbox id \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\"" Sep 13 01:38:10.621898 env[1475]: time="2025-09-13T01:38:10.621865397Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:38:10.666551 env[1475]: time="2025-09-13T01:38:10.666499676Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8938d0bb300ae87005e67489b49b4f9ab565189be035daef0c4dbe0dbb136c9f\"" Sep 13 01:38:10.668367 env[1475]: time="2025-09-13T01:38:10.668319914Z" level=info msg="StartContainer for \"8938d0bb300ae87005e67489b49b4f9ab565189be035daef0c4dbe0dbb136c9f\"" Sep 13 01:38:10.693176 systemd[1]: Started cri-containerd-8938d0bb300ae87005e67489b49b4f9ab565189be035daef0c4dbe0dbb136c9f.scope. Sep 13 01:38:10.725873 systemd[1]: cri-containerd-8938d0bb300ae87005e67489b49b4f9ab565189be035daef0c4dbe0dbb136c9f.scope: Deactivated successfully. 
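
The earlier "failed to sync secret cache" retries and the reflector errors ("no relationship found between node ... and this object") are the node authorizer at work: a kubelet may only read secrets referenced by pods already bound to it, so the watches fail until the new pod's binding propagates, after which the 500ms retries succeed and the sandbox above starts. A hedged probe from outside the node; the command is valid, though an impersonated check only approximates the per-pod decision the authorizer made for the kubelet:

    # A sketch: ask whether the node identity may read one of the
    # secrets that timed out above.
    kubectl auth can-i get secrets/cilium-ipsec-keys -n kube-system \
      --as=system:node:ci-3510.3.8-n-49eff79a60 --as-group=system:nodes
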
Sep 13 01:38:10.726904 env[1475]: time="2025-09-13T01:38:10.726854660Z" level=info msg="StartContainer for \"8938d0bb300ae87005e67489b49b4f9ab565189be035daef0c4dbe0dbb136c9f\" returns successfully" Sep 13 01:38:10.785377 env[1475]: time="2025-09-13T01:38:10.785324966Z" level=info msg="shim disconnected" id=8938d0bb300ae87005e67489b49b4f9ab565189be035daef0c4dbe0dbb136c9f Sep 13 01:38:10.785377 env[1475]: time="2025-09-13T01:38:10.785371846Z" level=warning msg="cleaning up after shim disconnected" id=8938d0bb300ae87005e67489b49b4f9ab565189be035daef0c4dbe0dbb136c9f namespace=k8s.io Sep 13 01:38:10.785377 env[1475]: time="2025-09-13T01:38:10.785383206Z" level=info msg="cleaning up dead shim" Sep 13 01:38:10.792222 env[1475]: time="2025-09-13T01:38:10.792178519Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4403 runtime=io.containerd.runc.v2\n" Sep 13 01:38:11.290221 kubelet[2420]: E0913 01:38:11.290172 2420 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-m7p6w" podUID="6504e50c-1684-4c0c-a591-5ed9b4788fcb" Sep 13 01:38:11.692107 env[1475]: time="2025-09-13T01:38:11.692061574Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:38:11.737464 env[1475]: time="2025-09-13T01:38:11.737418413Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"764233814e309eb9081027cac1b7bf51bb3d08765aefa82e8b3e3f37fa80abe3\"" Sep 13 01:38:11.738199 env[1475]: time="2025-09-13T01:38:11.738174172Z" level=info msg="StartContainer for \"764233814e309eb9081027cac1b7bf51bb3d08765aefa82e8b3e3f37fa80abe3\"" Sep 13 01:38:11.757221 systemd[1]: Started cri-containerd-764233814e309eb9081027cac1b7bf51bb3d08765aefa82e8b3e3f37fa80abe3.scope. Sep 13 01:38:11.791878 env[1475]: time="2025-09-13T01:38:11.791825403Z" level=info msg="StartContainer for \"764233814e309eb9081027cac1b7bf51bb3d08765aefa82e8b3e3f37fa80abe3\" returns successfully" Sep 13 01:38:11.803158 systemd[1]: cri-containerd-764233814e309eb9081027cac1b7bf51bb3d08765aefa82e8b3e3f37fa80abe3.scope: Deactivated successfully. Sep 13 01:38:11.836993 env[1475]: time="2025-09-13T01:38:11.836941202Z" level=info msg="shim disconnected" id=764233814e309eb9081027cac1b7bf51bb3d08765aefa82e8b3e3f37fa80abe3 Sep 13 01:38:11.836993 env[1475]: time="2025-09-13T01:38:11.836987082Z" level=warning msg="cleaning up after shim disconnected" id=764233814e309eb9081027cac1b7bf51bb3d08765aefa82e8b3e3f37fa80abe3 namespace=k8s.io Sep 13 01:38:11.836993 env[1475]: time="2025-09-13T01:38:11.836996722Z" level=info msg="cleaning up dead shim" Sep 13 01:38:11.843214 env[1475]: time="2025-09-13T01:38:11.843170276Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4463 runtime=io.containerd.runc.v2\n" Sep 13 01:38:12.362869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-764233814e309eb9081027cac1b7bf51bb3d08765aefa82e8b3e3f37fa80abe3-rootfs.mount: Deactivated successfully. 
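
Unlike the two failures in the previous pod, this sequence is healthy: mount-cgroup is a short-lived init container, so its systemd scope deactivates (the process has already exited) before the kubelet even logs StartContainer success, and the shim-disconnected cleanup that follows is routine. The exited container stays inspectable until garbage collection; a sketch, with the id prefix copied from the log above:

    # A sketch: exited init containers remain visible to crictl until
    # the kubelet garbage-collects them.
    crictl ps -a --name mount-cgroup --state exited
    crictl logs 8938d0bb300a 2>&1 | tail
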
Sep 13 01:38:12.404456 kubelet[2420]: E0913 01:38:12.404422 2420 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 01:38:12.694240 env[1475]: time="2025-09-13T01:38:12.694155307Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 01:38:12.749997 env[1475]: time="2025-09-13T01:38:12.749953176Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"299a973be858d859f39c09c5d44e2740dcb965acbe95403876fe5334cde3aa4c\""
Sep 13 01:38:12.750720 env[1475]: time="2025-09-13T01:38:12.750659616Z" level=info msg="StartContainer for \"299a973be858d859f39c09c5d44e2740dcb965acbe95403876fe5334cde3aa4c\""
Sep 13 01:38:12.773515 systemd[1]: Started cri-containerd-299a973be858d859f39c09c5d44e2740dcb965acbe95403876fe5334cde3aa4c.scope.
Sep 13 01:38:12.806014 systemd[1]: cri-containerd-299a973be858d859f39c09c5d44e2740dcb965acbe95403876fe5334cde3aa4c.scope: Deactivated successfully.
Sep 13 01:38:12.809030 env[1475]: time="2025-09-13T01:38:12.808996563Z" level=info msg="StartContainer for \"299a973be858d859f39c09c5d44e2740dcb965acbe95403876fe5334cde3aa4c\" returns successfully"
Sep 13 01:38:12.848796 env[1475]: time="2025-09-13T01:38:12.848752487Z" level=info msg="shim disconnected" id=299a973be858d859f39c09c5d44e2740dcb965acbe95403876fe5334cde3aa4c
Sep 13 01:38:12.849210 env[1475]: time="2025-09-13T01:38:12.849191767Z" level=warning msg="cleaning up after shim disconnected" id=299a973be858d859f39c09c5d44e2740dcb965acbe95403876fe5334cde3aa4c namespace=k8s.io
Sep 13 01:38:12.849319 env[1475]: time="2025-09-13T01:38:12.849303567Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:12.856421 env[1475]: time="2025-09-13T01:38:12.856390600Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4523 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:13.290168 kubelet[2420]: E0913 01:38:13.290116 2420 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-m7p6w" podUID="6504e50c-1684-4c0c-a591-5ed9b4788fcb"
Sep 13 01:38:13.362917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-299a973be858d859f39c09c5d44e2740dcb965acbe95403876fe5334cde3aa4c-rootfs.mount: Deactivated successfully.
Sep 13 01:38:13.699806 env[1475]: time="2025-09-13T01:38:13.699756167Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 01:38:13.749787 env[1475]: time="2025-09-13T01:38:13.749734603Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07\""
Sep 13 01:38:13.750405 env[1475]: time="2025-09-13T01:38:13.750372202Z" level=info msg="StartContainer for \"7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07\""
Sep 13 01:38:13.770236 systemd[1]: Started cri-containerd-7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07.scope.
Sep 13 01:38:13.796284 systemd[1]: cri-containerd-7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07.scope: Deactivated successfully.
Sep 13 01:38:13.797928 env[1475]: time="2025-09-13T01:38:13.797831240Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47001ffb_82a9_4801_93d1_a0f6abfcf80c.slice/cri-containerd-7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07.scope/memory.events\": no such file or directory"
Sep 13 01:38:13.807271 env[1475]: time="2025-09-13T01:38:13.807212712Z" level=info msg="StartContainer for \"7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07\" returns successfully"
Sep 13 01:38:13.844418 env[1475]: time="2025-09-13T01:38:13.844364079Z" level=info msg="shim disconnected" id=7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07
Sep 13 01:38:13.844714 env[1475]: time="2025-09-13T01:38:13.844683638Z" level=warning msg="cleaning up after shim disconnected" id=7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07 namespace=k8s.io
Sep 13 01:38:13.844794 env[1475]: time="2025-09-13T01:38:13.844780478Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:13.854771 env[1475]: time="2025-09-13T01:38:13.854736389Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4577 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:14.362955 systemd[1]: run-containerd-runc-k8s.io-7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07-runc.2Nwnqh.mount: Deactivated successfully.
Sep 13 01:38:14.363050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f5a4ccdec69bfbb91465e4e3daaa85f523812561c4f704056734a46b3c52b07-rootfs.mount: Deactivated successfully.
Sep 13 01:38:14.702213 env[1475]: time="2025-09-13T01:38:14.702168643Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 01:38:14.770005 env[1475]: time="2025-09-13T01:38:14.769957383Z" level=info msg="CreateContainer within sandbox \"20aad5c382e65a82ad1301809cd50aee1c3e312a804a40624097c001a52b92d5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af80d5ae01b015ceda805a006920b37680e594c9bc1865483e46852ef1980344\""
Sep 13 01:38:14.771038 env[1475]: time="2025-09-13T01:38:14.770993262Z" level=info msg="StartContainer for \"af80d5ae01b015ceda805a006920b37680e594c9bc1865483e46852ef1980344\""
Sep 13 01:38:14.794395 systemd[1]: Started cri-containerd-af80d5ae01b015ceda805a006920b37680e594c9bc1865483e46852ef1980344.scope.
Sep 13 01:38:14.827936 env[1475]: time="2025-09-13T01:38:14.827873132Z" level=info msg="StartContainer for \"af80d5ae01b015ceda805a006920b37680e594c9bc1865483e46852ef1980344\" returns successfully"
Sep 13 01:38:15.289799 kubelet[2420]: E0913 01:38:15.289752 2420 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-m7p6w" podUID="6504e50c-1684-4c0c-a591-5ed9b4788fcb"
Sep 13 01:38:15.321772 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 13 01:38:15.722555 kubelet[2420]: I0913 01:38:15.722507 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m8qw5" podStartSLOduration=7.722479074 podStartE2EDuration="7.722479074s" podCreationTimestamp="2025-09-13 01:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:38:15.722308634 +0000 UTC m=+193.556843226" watchObservedRunningTime="2025-09-13 01:38:15.722479074 +0000 UTC m=+193.557013666"
Sep 13 01:38:16.013304 systemd[1]: run-containerd-runc-k8s.io-af80d5ae01b015ceda805a006920b37680e594c9bc1865483e46852ef1980344-runc.VkhQTw.mount: Deactivated successfully.
Sep 13 01:38:17.290673 kubelet[2420]: E0913 01:38:17.290616 2420 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-m7p6w" podUID="6504e50c-1684-4c0c-a591-5ed9b4788fcb"
Sep 13 01:38:17.972648 systemd-networkd[1638]: lxc_health: Link UP
Sep 13 01:38:18.005351 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 01:38:18.006366 systemd-networkd[1638]: lxc_health: Gained carrier
Sep 13 01:38:19.752356 systemd-networkd[1638]: lxc_health: Gained IPv6LL
Sep 13 01:38:20.317310 systemd[1]: run-containerd-runc-k8s.io-af80d5ae01b015ceda805a006920b37680e594c9bc1865483e46852ef1980344-runc.3jxZzL.mount: Deactivated successfully.
Sep 13 01:38:24.573967 systemd[1]: run-containerd-runc-k8s.io-af80d5ae01b015ceda805a006920b37680e594c9bc1865483e46852ef1980344-runc.w01MYU.mount: Deactivated successfully.
Sep 13 01:38:24.694712 sshd[4266]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:24.697581 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit.
Sep 13 01:38:24.698489 systemd[1]: sshd@23-10.200.20.15:22-10.200.16.10:58098.service: Deactivated successfully.
Sep 13 01:38:24.699292 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 01:38:24.700134 systemd-logind[1463]: Removed session 26.