Sep 6 01:19:48.133710 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 6 01:19:48.133728 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025
Sep 6 01:19:48.133736 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 6 01:19:48.133743 kernel: printk: bootconsole [pl11] enabled
Sep 6 01:19:48.133748 kernel: efi: EFI v2.70 by EDK II
Sep 6 01:19:48.133753 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Sep 6 01:19:48.133760 kernel: random: crng init done
Sep 6 01:19:48.133765 kernel: ACPI: Early table checksum verification disabled
Sep 6 01:19:48.133771 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 6 01:19:48.133776 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133781 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133787 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 6 01:19:48.133793 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133799 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133805 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133811 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133817 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133824 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133830 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 6 01:19:48.133835 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 6 01:19:48.133841 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 6 01:19:48.133847 kernel: NUMA: Failed to initialise from firmware
Sep 6 01:19:48.133852 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Sep 6 01:19:48.133858 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Sep 6 01:19:48.133864 kernel: Zone ranges:
Sep 6 01:19:48.133869 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 6 01:19:48.133875 kernel: DMA32 empty
Sep 6 01:19:48.133880 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 6 01:19:48.133887 kernel: Movable zone start for each node
Sep 6 01:19:48.133892 kernel: Early memory node ranges
Sep 6 01:19:48.133898 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 6 01:19:48.133904 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 6 01:19:48.133909 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 6 01:19:48.133915 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 6 01:19:48.133921 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 6 01:19:48.133926 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 6 01:19:48.133932 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 6 01:19:48.133937 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 6 01:19:48.133943 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 6 01:19:48.133949 kernel: psci: probing for conduit method from ACPI.
Sep 6 01:19:48.133958 kernel: psci: PSCIv1.1 detected in firmware.
Sep 6 01:19:48.133964 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 01:19:48.133970 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 6 01:19:48.133976 kernel: psci: SMC Calling Convention v1.4
Sep 6 01:19:48.133982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Sep 6 01:19:48.133989 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Sep 6 01:19:48.133995 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 6 01:19:48.134001 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 6 01:19:48.134007 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 6 01:19:48.134013 kernel: Detected PIPT I-cache on CPU0
Sep 6 01:19:48.134019 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 01:19:48.134026 kernel: CPU features: detected: Hardware dirty bit management
Sep 6 01:19:48.134031 kernel: CPU features: detected: Spectre-BHB
Sep 6 01:19:48.134038 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 01:19:48.134044 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 01:19:48.134050 kernel: CPU features: detected: ARM erratum 1418040
Sep 6 01:19:48.134057 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 6 01:19:48.134063 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 6 01:19:48.134069 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 6 01:19:48.134075 kernel: Policy zone: Normal
Sep 6 01:19:48.134082 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 01:19:48.134089 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 01:19:48.134095 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 01:19:48.134101 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 01:19:48.134107 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 01:19:48.134113 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Sep 6 01:19:48.134120 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved)
Sep 6 01:19:48.134127 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 6 01:19:48.134133 kernel: trace event string verifier disabled
Sep 6 01:19:48.134139 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 01:19:48.134146 kernel: rcu: RCU event tracing is enabled.
Sep 6 01:19:48.134152 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 6 01:19:48.134158 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 01:19:48.134164 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 01:19:48.134171 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 01:19:48.134177 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 6 01:19:48.142248 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 01:19:48.142257 kernel: GICv3: 960 SPIs implemented
Sep 6 01:19:48.142268 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 01:19:48.142274 kernel: GICv3: Distributor has no Range Selector support
Sep 6 01:19:48.142281 kernel: Root IRQ handler: gic_handle_irq
Sep 6 01:19:48.142287 kernel: GICv3: 16 PPIs implemented
Sep 6 01:19:48.142293 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 6 01:19:48.142299 kernel: ITS: No ITS available, not enabling LPIs
Sep 6 01:19:48.142306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 01:19:48.142312 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 6 01:19:48.142319 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 6 01:19:48.142325 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 6 01:19:48.142332 kernel: Console: colour dummy device 80x25
Sep 6 01:19:48.142340 kernel: printk: console [tty1] enabled
Sep 6 01:19:48.142346 kernel: ACPI: Core revision 20210730
Sep 6 01:19:48.142353 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 6 01:19:48.142360 kernel: pid_max: default: 32768 minimum: 301
Sep 6 01:19:48.142366 kernel: LSM: Security Framework initializing
Sep 6 01:19:48.142372 kernel: SELinux: Initializing.
Sep 6 01:19:48.142378 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 01:19:48.142385 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 01:19:48.142391 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 6 01:19:48.142399 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 6 01:19:48.142405 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 01:19:48.142412 kernel: Remapping and enabling EFI services.
Sep 6 01:19:48.142418 kernel: smp: Bringing up secondary CPUs ...
Sep 6 01:19:48.142424 kernel: Detected PIPT I-cache on CPU1
Sep 6 01:19:48.142431 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 6 01:19:48.142437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 01:19:48.142443 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 6 01:19:48.142449 kernel: smp: Brought up 1 node, 2 CPUs
Sep 6 01:19:48.142456 kernel: SMP: Total of 2 processors activated.
Sep 6 01:19:48.142463 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 01:19:48.142470 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 6 01:19:48.142476 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 6 01:19:48.142482 kernel: CPU features: detected: CRC32 instructions
Sep 6 01:19:48.142489 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 6 01:19:48.142495 kernel: CPU features: detected: LSE atomic instructions
Sep 6 01:19:48.142502 kernel: CPU features: detected: Privileged Access Never
Sep 6 01:19:48.142509 kernel: CPU: All CPU(s) started at EL1
Sep 6 01:19:48.142515 kernel: alternatives: patching kernel code
Sep 6 01:19:48.142523 kernel: devtmpfs: initialized
Sep 6 01:19:48.142534 kernel: KASLR enabled
Sep 6 01:19:48.142541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 01:19:48.142549 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 6 01:19:48.142556 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 01:19:48.142562 kernel: SMBIOS 3.1.0 present.
Sep 6 01:19:48.142569 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 6 01:19:48.142576 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 01:19:48.142582 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 01:19:48.142591 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 01:19:48.142597 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 01:19:48.142604 kernel: audit: initializing netlink subsys (disabled)
Sep 6 01:19:48.142611 kernel: audit: type=2000 audit(0.091:1): state=initialized audit_enabled=0 res=1
Sep 6 01:19:48.142618 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 01:19:48.142625 kernel: cpuidle: using governor menu
Sep 6 01:19:48.142631 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 01:19:48.142639 kernel: ASID allocator initialised with 32768 entries
Sep 6 01:19:48.142646 kernel: ACPI: bus type PCI registered
Sep 6 01:19:48.142652 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 01:19:48.142659 kernel: Serial: AMBA PL011 UART driver
Sep 6 01:19:48.142665 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 01:19:48.142672 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 01:19:48.142679 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 01:19:48.142686 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 01:19:48.142692 kernel: cryptd: max_cpu_qlen set to 1000
Sep 6 01:19:48.142700 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 01:19:48.142707 kernel: ACPI: Added _OSI(Module Device)
Sep 6 01:19:48.142714 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 01:19:48.142720 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 01:19:48.142727 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 6 01:19:48.142733 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 6 01:19:48.142740 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 6 01:19:48.142747 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 01:19:48.142753 kernel: ACPI: Interpreter enabled
Sep 6 01:19:48.142761 kernel: ACPI: Using GIC for interrupt routing
Sep 6 01:19:48.142768 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 6 01:19:48.142774 kernel: printk: console [ttyAMA0] enabled
Sep 6 01:19:48.142781 kernel: printk: bootconsole [pl11] disabled
Sep 6 01:19:48.142788 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 6 01:19:48.142795 kernel: iommu: Default domain type: Translated
Sep 6 01:19:48.142801 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 01:19:48.142808 kernel: vgaarb: loaded
Sep 6 01:19:48.142815 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 6 01:19:48.142821 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 6 01:19:48.142829 kernel: PTP clock support registered
Sep 6 01:19:48.142835 kernel: Registered efivars operations
Sep 6 01:19:48.142842 kernel: No ACPI PMU IRQ for CPU0
Sep 6 01:19:48.142849 kernel: No ACPI PMU IRQ for CPU1
Sep 6 01:19:48.142855 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 01:19:48.142862 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 01:19:48.142869 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 01:19:48.142875 kernel: pnp: PnP ACPI init
Sep 6 01:19:48.142882 kernel: pnp: PnP ACPI: found 0 devices
Sep 6 01:19:48.142890 kernel: NET: Registered PF_INET protocol family
Sep 6 01:19:48.142896 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 01:19:48.142903 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 01:19:48.142910 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 01:19:48.142917 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 01:19:48.142924 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 6 01:19:48.142930 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 01:19:48.142937 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 01:19:48.142945 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 01:19:48.142951 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 01:19:48.142958 kernel: PCI: CLS 0 bytes, default 64
Sep 6 01:19:48.142964 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 6 01:19:48.142971 kernel: kvm [1]: HYP mode not available
Sep 6 01:19:48.142978 kernel: Initialise system trusted keyrings
Sep 6 01:19:48.142984 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 01:19:48.142991 kernel: Key type asymmetric registered
Sep 6 01:19:48.142997 kernel: Asymmetric key parser 'x509' registered
Sep 6 01:19:48.143005 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 6 01:19:48.143012 kernel: io scheduler mq-deadline registered
Sep 6 01:19:48.143018 kernel: io scheduler kyber registered
Sep 6 01:19:48.143025 kernel: io scheduler bfq registered
Sep 6 01:19:48.143032 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 01:19:48.143038 kernel: thunder_xcv, ver 1.0
Sep 6 01:19:48.143045 kernel: thunder_bgx, ver 1.0
Sep 6 01:19:48.143051 kernel: nicpf, ver 1.0
Sep 6 01:19:48.143057 kernel: nicvf, ver 1.0
Sep 6 01:19:48.143176 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 01:19:48.143282 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T01:19:47 UTC (1757121587)
Sep 6 01:19:48.143291 kernel: efifb: probing for efifb
Sep 6 01:19:48.143299 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 6 01:19:48.143306 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 6 01:19:48.143312 kernel: efifb: scrolling: redraw
Sep 6 01:19:48.143319 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 6 01:19:48.143326 kernel: Console: switching to colour frame buffer device 128x48
Sep 6 01:19:48.143334 kernel: fb0: EFI VGA frame buffer device
Sep 6 01:19:48.143341 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 6 01:19:48.143348 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 01:19:48.143354 kernel: NET: Registered PF_INET6 protocol family
Sep 6 01:19:48.143361 kernel: Segment Routing with IPv6
Sep 6 01:19:48.143368 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 01:19:48.143374 kernel: NET: Registered PF_PACKET protocol family
Sep 6 01:19:48.143381 kernel: Key type dns_resolver registered
Sep 6 01:19:48.143387 kernel: registered taskstats version 1
Sep 6 01:19:48.143394 kernel: Loading compiled-in X.509 certificates
Sep 6 01:19:48.143402 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386'
Sep 6 01:19:48.143408 kernel: Key type .fscrypt registered
Sep 6 01:19:48.143415 kernel: Key type fscrypt-provisioning registered
Sep 6 01:19:48.143422 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 01:19:48.143429 kernel: ima: Allocated hash algorithm: sha1
Sep 6 01:19:48.143435 kernel: ima: No architecture policies found
Sep 6 01:19:48.143442 kernel: clk: Disabling unused clocks
Sep 6 01:19:48.143448 kernel: Freeing unused kernel memory: 36416K
Sep 6 01:19:48.143456 kernel: Run /init as init process
Sep 6 01:19:48.143463 kernel: with arguments:
Sep 6 01:19:48.143470 kernel: /init
Sep 6 01:19:48.143476 kernel: with environment:
Sep 6 01:19:48.143483 kernel: HOME=/
Sep 6 01:19:48.143489 kernel: TERM=linux
Sep 6 01:19:48.143496 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 01:19:48.143504 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 6 01:19:48.143515 systemd[1]: Detected virtualization microsoft.
Sep 6 01:19:48.143522 systemd[1]: Detected architecture arm64.
Sep 6 01:19:48.143529 systemd[1]: Running in initrd.
Sep 6 01:19:48.143536 systemd[1]: No hostname configured, using default hostname.
Sep 6 01:19:48.143542 systemd[1]: Hostname set to .
Sep 6 01:19:48.143550 systemd[1]: Initializing machine ID from random generator.
Sep 6 01:19:48.143557 systemd[1]: Queued start job for default target initrd.target.
Sep 6 01:19:48.143564 systemd[1]: Started systemd-ask-password-console.path.
Sep 6 01:19:48.143572 systemd[1]: Reached target cryptsetup.target.
Sep 6 01:19:48.143580 systemd[1]: Reached target paths.target.
Sep 6 01:19:48.143587 systemd[1]: Reached target slices.target.
Sep 6 01:19:48.143594 systemd[1]: Reached target swap.target.
Sep 6 01:19:48.143601 systemd[1]: Reached target timers.target.
Sep 6 01:19:48.143610 systemd[1]: Listening on iscsid.socket.
Sep 6 01:19:48.143617 systemd[1]: Listening on iscsiuio.socket.
Sep 6 01:19:48.143625 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 6 01:19:48.143633 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 6 01:19:48.143640 systemd[1]: Listening on systemd-journald.socket.
Sep 6 01:19:48.143647 systemd[1]: Listening on systemd-networkd.socket.
Sep 6 01:19:48.143654 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 6 01:19:48.143661 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 6 01:19:48.143668 systemd[1]: Reached target sockets.target.
Sep 6 01:19:48.143676 systemd[1]: Starting kmod-static-nodes.service...
Sep 6 01:19:48.143683 systemd[1]: Finished network-cleanup.service.
Sep 6 01:19:48.143690 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 01:19:48.143698 systemd[1]: Starting systemd-journald.service...
Sep 6 01:19:48.143705 systemd[1]: Starting systemd-modules-load.service...
Sep 6 01:19:48.143712 systemd[1]: Starting systemd-resolved.service...
Sep 6 01:19:48.143723 systemd-journald[276]: Journal started
Sep 6 01:19:48.143765 systemd-journald[276]: Runtime Journal (/run/log/journal/42d0448c472d4607a1ec0271c6891b0c) is 8.0M, max 78.5M, 70.5M free.
Sep 6 01:19:48.137125 systemd-modules-load[277]: Inserted module 'overlay'
Sep 6 01:19:48.179932 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 6 01:19:48.179980 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 01:19:48.191347 systemd-modules-load[277]: Inserted module 'br_netfilter'
Sep 6 01:19:48.196974 systemd[1]: Started systemd-journald.service.
Sep 6 01:19:48.196996 kernel: Bridge firewalling registered
Sep 6 01:19:48.192110 systemd-resolved[278]: Positive Trust Anchors:
Sep 6 01:19:48.192117 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 01:19:48.243023 kernel: SCSI subsystem initialized
Sep 6 01:19:48.243047 kernel: audit: type=1130 audit(1757121588.219:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.192145 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 6 01:19:48.295939 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 01:19:48.196856 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 6 01:19:48.337552 kernel: device-mapper: uevent: version 1.0.3
Sep 6 01:19:48.337583 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 6 01:19:48.337592 kernel: audit: type=1130 audit(1757121588.311:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.219896 systemd[1]: Started systemd-resolved.service.
Sep 6 01:19:48.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.312057 systemd[1]: Finished kmod-static-nodes.service.
Sep 6 01:19:48.404261 kernel: audit: type=1130 audit(1757121588.343:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.404285 kernel: audit: type=1130 audit(1757121588.373:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.343287 systemd-modules-load[277]: Inserted module 'dm_multipath'
Sep 6 01:19:48.344268 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 01:19:48.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.374080 systemd[1]: Finished systemd-modules-load.service.
Sep 6 01:19:48.469450 kernel: audit: type=1130 audit(1757121588.406:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.469476 kernel: audit: type=1130 audit(1757121588.437:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.407067 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 6 01:19:48.437665 systemd[1]: Reached target nss-lookup.target.
Sep 6 01:19:48.464345 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 6 01:19:48.489848 systemd[1]: Starting systemd-sysctl.service...
Sep 6 01:19:48.498512 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 6 01:19:48.543207 kernel: audit: type=1130 audit(1757121588.521:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.506215 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 6 01:19:48.568253 kernel: audit: type=1130 audit(1757121588.545:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.521496 systemd[1]: Finished systemd-sysctl.service.
Sep 6 01:19:48.545988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 6 01:19:48.601911 kernel: audit: type=1130 audit(1757121588.570:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.571419 systemd[1]: Starting dracut-cmdline.service...
Sep 6 01:19:48.608111 dracut-cmdline[299]: dracut-dracut-053
Sep 6 01:19:48.613766 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4
Sep 6 01:19:48.706229 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 01:19:48.722207 kernel: iscsi: registered transport (tcp)
Sep 6 01:19:48.744204 kernel: iscsi: registered transport (qla4xxx)
Sep 6 01:19:48.744276 kernel: QLogic iSCSI HBA Driver
Sep 6 01:19:48.773460 systemd[1]: Finished dracut-cmdline.service.
Sep 6 01:19:48.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:48.779123 systemd[1]: Starting dracut-pre-udev.service...
Sep 6 01:19:48.835217 kernel: raid6: neonx8 gen() 13732 MB/s
Sep 6 01:19:48.857203 kernel: raid6: neonx8 xor() 10714 MB/s
Sep 6 01:19:48.878202 kernel: raid6: neonx4 gen() 13397 MB/s
Sep 6 01:19:48.899193 kernel: raid6: neonx4 xor() 11265 MB/s
Sep 6 01:19:48.920199 kernel: raid6: neonx2 gen() 12930 MB/s
Sep 6 01:19:48.940211 kernel: raid6: neonx2 xor() 10226 MB/s
Sep 6 01:19:48.960198 kernel: raid6: neonx1 gen() 10634 MB/s
Sep 6 01:19:48.982201 kernel: raid6: neonx1 xor() 8777 MB/s
Sep 6 01:19:49.003200 kernel: raid6: int64x8 gen() 6256 MB/s
Sep 6 01:19:49.023210 kernel: raid6: int64x8 xor() 3540 MB/s
Sep 6 01:19:49.044197 kernel: raid6: int64x4 gen() 7174 MB/s
Sep 6 01:19:49.064196 kernel: raid6: int64x4 xor() 3855 MB/s
Sep 6 01:19:49.084197 kernel: raid6: int64x2 gen() 6155 MB/s
Sep 6 01:19:49.105209 kernel: raid6: int64x2 xor() 3319 MB/s
Sep 6 01:19:49.125194 kernel: raid6: int64x1 gen() 5043 MB/s
Sep 6 01:19:49.151688 kernel: raid6: int64x1 xor() 2646 MB/s
Sep 6 01:19:49.151721 kernel: raid6: using algorithm neonx8 gen() 13732 MB/s
Sep 6 01:19:49.151731 kernel: raid6: .... xor() 10714 MB/s, rmw enabled
Sep 6 01:19:49.156522 kernel: raid6: using neon recovery algorithm
Sep 6 01:19:49.177870 kernel: xor: measuring software checksum speed
Sep 6 01:19:49.177912 kernel: 8regs : 17235 MB/sec
Sep 6 01:19:49.182080 kernel: 32regs : 20681 MB/sec
Sep 6 01:19:49.190901 kernel: arm64_neon : 25833 MB/sec
Sep 6 01:19:49.190921 kernel: xor: using function: arm64_neon (25833 MB/sec)
Sep 6 01:19:49.247203 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 6 01:19:49.256293 systemd[1]: Finished dracut-pre-udev.service.
Sep 6 01:19:49.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:49.264000 audit: BPF prog-id=7 op=LOAD
Sep 6 01:19:49.265000 audit: BPF prog-id=8 op=LOAD
Sep 6 01:19:49.265774 systemd[1]: Starting systemd-udevd.service...
Sep 6 01:19:49.283656 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Sep 6 01:19:49.290923 systemd[1]: Started systemd-udevd.service.
Sep 6 01:19:49.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:49.302535 systemd[1]: Starting dracut-pre-trigger.service...
Sep 6 01:19:49.315984 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Sep 6 01:19:49.344828 systemd[1]: Finished dracut-pre-trigger.service.
Sep 6 01:19:49.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:49.350500 systemd[1]: Starting systemd-udev-trigger.service...
Sep 6 01:19:49.387491 systemd[1]: Finished systemd-udev-trigger.service.
Sep 6 01:19:49.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:49.448213 kernel: hv_vmbus: Vmbus version:5.3
Sep 6 01:19:49.467206 kernel: hv_vmbus: registering driver hid_hyperv
Sep 6 01:19:49.467249 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 6 01:19:49.467259 kernel: hv_vmbus: registering driver hv_netvsc
Sep 6 01:19:49.497666 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 6 01:19:49.497735 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 6 01:19:49.497746 kernel: hv_vmbus: registering driver hv_storvsc
Sep 6 01:19:49.497762 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 6 01:19:49.508466 kernel: scsi host1: storvsc_host_t
Sep 6 01:19:49.511222 kernel: scsi host0: storvsc_host_t
Sep 6 01:19:49.523094 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 6 01:19:49.532216 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 6 01:19:49.552153 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 6 01:19:49.575145 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 6 01:19:49.575167 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 6 01:19:49.586909 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 6 01:19:49.587014 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 6 01:19:49.587091 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 6 01:19:49.587169 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 6 01:19:49.587277 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 6 01:19:49.587362 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 6 01:19:49.587379 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 6 01:19:49.595218 kernel: hv_netvsc 000d3afd-f350-000d-3afd-f350000d3afd eth0: VF slot 1 added
Sep 6 01:19:49.604220 kernel: hv_vmbus: registering driver hv_pci
Sep 6 01:19:49.616997 kernel: hv_pci b7a2dee8-f2c6-44ef-838e-1180c2bc2633: PCI VMBus probing: Using version 0x10004
Sep 6 01:19:49.698816 kernel: hv_pci b7a2dee8-f2c6-44ef-838e-1180c2bc2633: PCI host bridge to bus f2c6:00
Sep 6 01:19:49.698940 kernel: pci_bus f2c6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 6 01:19:49.699041 kernel: pci_bus f2c6:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 6 01:19:49.699119 kernel: pci f2c6:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 6 01:19:49.699230 kernel: pci f2c6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 6 01:19:49.699315 kernel: pci f2c6:00:02.0: enabling Extended Tags
Sep 6 01:19:49.699394 kernel: pci f2c6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f2c6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 6 01:19:49.699470 kernel: pci_bus f2c6:00: busn_res: [bus 00-ff] end is updated to 00
Sep 6 01:19:49.699541 kernel: pci f2c6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 6 01:19:49.739052 kernel: mlx5_core f2c6:00:02.0: enabling device (0000 -> 0002)
Sep 6 01:19:49.970994 kernel: mlx5_core f2c6:00:02.0: firmware version: 16.30.1284
Sep 6 01:19:49.971115 kernel: mlx5_core f2c6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Sep 6 01:19:49.971217 kernel: hv_netvsc 000d3afd-f350-000d-3afd-f350000d3afd eth0: VF registering: eth1
Sep 6 01:19:49.971302 kernel: mlx5_core f2c6:00:02.0 eth1: joined to eth0
Sep 6 01:19:49.980210 kernel: mlx5_core f2c6:00:02.0 enP62150s1: renamed from eth1
Sep 6 01:19:50.156218 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (532)
Sep 6 01:19:50.157059 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Sep 6 01:19:50.176298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 6 01:19:50.275047 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Sep 6 01:19:50.285856 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Sep 6 01:19:50.295749 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Sep 6 01:19:50.310735 systemd[1]: Starting disk-uuid.service...
Sep 6 01:19:51.344209 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 6 01:19:51.344644 disk-uuid[595]: The operation has completed successfully.
Sep 6 01:19:51.397687 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 01:19:51.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.397802 systemd[1]: Finished disk-uuid.service.
Sep 6 01:19:51.410939 systemd[1]: Starting verity-setup.service...
Sep 6 01:19:51.451200 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 6 01:19:51.623949 systemd[1]: Found device dev-mapper-usr.device.
Sep 6 01:19:51.629463 systemd[1]: Mounting sysusr-usr.mount...
Sep 6 01:19:51.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.636634 systemd[1]: Finished verity-setup.service.
Sep 6 01:19:51.690921 systemd[1]: Mounted sysusr-usr.mount.
Sep 6 01:19:51.699032 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Sep 6 01:19:51.695601 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Sep 6 01:19:51.696388 systemd[1]: Starting ignition-setup.service...
Sep 6 01:19:51.703870 systemd[1]: Starting parse-ip-for-networkd.service...
Sep 6 01:19:51.740798 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 01:19:51.740853 kernel: BTRFS info (device sda6): using free space tree
Sep 6 01:19:51.746472 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 01:19:51.787881 systemd[1]: Finished parse-ip-for-networkd.service.
Sep 6 01:19:51.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.797000 audit: BPF prog-id=9 op=LOAD
Sep 6 01:19:51.797869 systemd[1]: Starting systemd-networkd.service...
Sep 6 01:19:51.812642 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 01:19:51.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.826810 systemd-networkd[868]: lo: Link UP
Sep 6 01:19:51.826817 systemd-networkd[868]: lo: Gained carrier
Sep 6 01:19:51.827255 systemd-networkd[868]: Enumeration completed
Sep 6 01:19:51.827335 systemd[1]: Started systemd-networkd.service.
Sep 6 01:19:51.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.829073 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 01:19:51.832100 systemd[1]: Reached target network.target.
Sep 6 01:19:51.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.837561 systemd[1]: Starting iscsiuio.service...
Sep 6 01:19:51.885775 iscsid[877]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 01:19:51.885775 iscsid[877]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Sep 6 01:19:51.885775 iscsid[877]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Sep 6 01:19:51.885775 iscsid[877]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Sep 6 01:19:51.885775 iscsid[877]: If using hardware iscsi like qla4xxx this message can be ignored.
Sep 6 01:19:51.885775 iscsid[877]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Sep 6 01:19:51.885775 iscsid[877]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Sep 6 01:19:52.010309 kernel: kauditd_printk_skb: 16 callbacks suppressed
Sep 6 01:19:52.010333 kernel: audit: type=1130 audit(1757121591.980:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.847859 systemd[1]: Started iscsiuio.service.
Sep 6 01:19:52.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.856777 systemd[1]: Starting iscsid.service...
Sep 6 01:19:52.038608 kernel: audit: type=1130 audit(1757121592.014:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:51.871397 systemd[1]: Started iscsid.service.
Sep 6 01:19:51.876676 systemd[1]: Starting dracut-initqueue.service...
Sep 6 01:19:51.898262 systemd[1]: Finished dracut-initqueue.service.
Sep 6 01:19:51.904399 systemd[1]: Reached target remote-fs-pre.target.
Sep 6 01:19:51.928361 systemd[1]: Reached target remote-cryptsetup.target.
Sep 6 01:19:51.947599 systemd[1]: Reached target remote-fs.target.
Sep 6 01:19:51.953072 systemd[1]: Starting dracut-pre-mount.service...
Sep 6 01:19:51.975778 systemd[1]: Finished ignition-setup.service.
Sep 6 01:19:51.996450 systemd[1]: Finished dracut-pre-mount.service.
Sep 6 01:19:52.038005 systemd[1]: Starting ignition-fetch-offline.service...
Sep 6 01:19:52.106201 kernel: mlx5_core f2c6:00:02.0 enP62150s1: Link up
Sep 6 01:19:52.146200 kernel: hv_netvsc 000d3afd-f350-000d-3afd-f350000d3afd eth0: Data path switched to VF: enP62150s1
Sep 6 01:19:52.146368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 6 01:19:52.152153 systemd-networkd[868]: enP62150s1: Link UP
Sep 6 01:19:52.153452 systemd-networkd[868]: eth0: Link UP
Sep 6 01:19:52.153857 systemd-networkd[868]: eth0: Gained carrier
Sep 6 01:19:52.168745 systemd-networkd[868]: enP62150s1: Gained carrier
Sep 6 01:19:52.185257 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.25/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 6 01:19:53.809332 systemd-networkd[868]: eth0: Gained IPv6LL
Sep 6 01:19:54.336099 ignition[892]: Ignition 2.14.0
Sep 6 01:19:54.339265 ignition[892]: Stage: fetch-offline
Sep 6 01:19:54.339390 ignition[892]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:54.339440 ignition[892]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:54.414728 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:54.414891 ignition[892]: parsed url from cmdline: ""
Sep 6 01:19:54.422383 systemd[1]: Finished ignition-fetch-offline.service.
Sep 6 01:19:54.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.414895 ignition[892]: no config URL provided
Sep 6 01:19:54.456698 kernel: audit: type=1130 audit(1757121594.428:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.451378 systemd[1]: Starting ignition-fetch.service...
Sep 6 01:19:54.414900 ignition[892]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 01:19:54.414908 ignition[892]: no config at "/usr/lib/ignition/user.ign"
Sep 6 01:19:54.414913 ignition[892]: failed to fetch config: resource requires networking
Sep 6 01:19:54.415377 ignition[892]: Ignition finished successfully
Sep 6 01:19:54.464283 ignition[899]: Ignition 2.14.0
Sep 6 01:19:54.464289 ignition[899]: Stage: fetch
Sep 6 01:19:54.464390 ignition[899]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:54.464408 ignition[899]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:54.466954 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:54.467062 ignition[899]: parsed url from cmdline: ""
Sep 6 01:19:54.467065 ignition[899]: no config URL provided
Sep 6 01:19:54.467070 ignition[899]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 01:19:54.467081 ignition[899]: no config at "/usr/lib/ignition/user.ign"
Sep 6 01:19:54.467106 ignition[899]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 6 01:19:54.597987 ignition[899]: GET result: OK
Sep 6 01:19:54.598079 ignition[899]: config has been read from IMDS userdata
Sep 6 01:19:54.601739 unknown[899]: fetched base config from "system"
Sep 6 01:19:54.641260 kernel: audit: type=1130 audit(1757121594.614:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.598125 ignition[899]: parsing config with SHA512: d2f3d5349e772267d5de52accfc5a7959963bc008b9682f4d9085ecc39fb39205edb27d81db4a0b9f676190806911a9c3978cbced6af6bf13dc45b4f931c6106
Sep 6 01:19:54.601747 unknown[899]: fetched base config from "system"
Sep 6 01:19:54.602382 ignition[899]: fetch: fetch complete
Sep 6 01:19:54.601764 unknown[899]: fetched user config from "azure"
Sep 6 01:19:54.602387 ignition[899]: fetch: fetch passed
Sep 6 01:19:54.692918 kernel: audit: type=1130 audit(1757121594.663:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.603584 systemd[1]: Finished ignition-fetch.service.
Sep 6 01:19:54.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.602441 ignition[899]: Ignition finished successfully
Sep 6 01:19:54.734536 kernel: audit: type=1130 audit(1757121594.698:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.615980 systemd[1]: Starting ignition-kargs.service...
Sep 6 01:19:54.648393 ignition[905]: Ignition 2.14.0
Sep 6 01:19:54.658735 systemd[1]: Finished ignition-kargs.service.
Sep 6 01:19:54.648400 ignition[905]: Stage: kargs
Sep 6 01:19:54.664965 systemd[1]: Starting ignition-disks.service...
Sep 6 01:19:54.648499 ignition[905]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:54.692785 systemd[1]: Finished ignition-disks.service.
Sep 6 01:19:54.648520 ignition[905]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:54.698968 systemd[1]: Reached target initrd-root-device.target.
Sep 6 01:19:54.651132 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:54.729490 systemd[1]: Reached target local-fs-pre.target.
Sep 6 01:19:54.653054 ignition[905]: kargs: kargs passed
Sep 6 01:19:54.740531 systemd[1]: Reached target local-fs.target.
Sep 6 01:19:54.653102 ignition[905]: Ignition finished successfully
Sep 6 01:19:54.750870 systemd[1]: Reached target sysinit.target.
Sep 6 01:19:54.674844 ignition[911]: Ignition 2.14.0
Sep 6 01:19:54.759800 systemd[1]: Reached target basic.target.
Sep 6 01:19:54.674850 ignition[911]: Stage: disks
Sep 6 01:19:54.770632 systemd[1]: Starting systemd-fsck-root.service...
Sep 6 01:19:54.674959 ignition[911]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:54.674976 ignition[911]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:54.677983 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:54.691234 ignition[911]: disks: disks passed
Sep 6 01:19:54.691291 ignition[911]: Ignition finished successfully
Sep 6 01:19:54.887635 systemd-fsck[919]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks
Sep 6 01:19:54.898928 systemd[1]: Finished systemd-fsck-root.service.
Sep 6 01:19:54.936321 kernel: audit: type=1130 audit(1757121594.905:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:54.909875 systemd[1]: Mounting sysroot.mount...
Sep 6 01:19:54.953201 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Sep 6 01:19:54.953933 systemd[1]: Mounted sysroot.mount.
Sep 6 01:19:54.958957 systemd[1]: Reached target initrd-root-fs.target.
Sep 6 01:19:54.993921 systemd[1]: Mounting sysroot-usr.mount...
Sep 6 01:19:55.000013 systemd[1]: Starting flatcar-metadata-hostname.service...
Sep 6 01:19:55.010727 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 01:19:55.010760 systemd[1]: Reached target ignition-diskful.target.
Sep 6 01:19:55.018281 systemd[1]: Mounted sysroot-usr.mount.
Sep 6 01:19:55.070658 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 01:19:55.076936 systemd[1]: Starting initrd-setup-root.service...
Sep 6 01:19:55.106206 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (929)
Sep 6 01:19:55.114847 initrd-setup-root[934]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 01:19:55.129176 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 01:19:55.129211 kernel: BTRFS info (device sda6): using free space tree
Sep 6 01:19:55.129221 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 01:19:55.139782 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 01:19:55.152820 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory
Sep 6 01:19:55.178057 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 01:19:55.188974 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 01:19:55.671006 systemd[1]: Finished initrd-setup-root.service.
Sep 6 01:19:55.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:55.677825 systemd[1]: Starting ignition-mount.service...
Sep 6 01:19:55.712013 kernel: audit: type=1130 audit(1757121595.676:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:55.707668 systemd[1]: Starting sysroot-boot.service...
Sep 6 01:19:55.726563 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Sep 6 01:19:55.727030 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Sep 6 01:19:55.743686 systemd[1]: Finished sysroot-boot.service.
Sep 6 01:19:55.773565 kernel: audit: type=1130 audit(1757121595.748:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:55.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:55.773669 ignition[998]: INFO : Ignition 2.14.0
Sep 6 01:19:55.773669 ignition[998]: INFO : Stage: mount
Sep 6 01:19:55.773669 ignition[998]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:55.773669 ignition[998]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:55.773669 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:55.773669 ignition[998]: INFO : mount: mount passed
Sep 6 01:19:55.773669 ignition[998]: INFO : Ignition finished successfully
Sep 6 01:19:55.848121 kernel: audit: type=1130 audit(1757121595.783:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:55.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:55.773887 systemd[1]: Finished ignition-mount.service.
Sep 6 01:19:56.269329 coreos-metadata[928]: Sep 06 01:19:56.269 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 6 01:19:56.279563 coreos-metadata[928]: Sep 06 01:19:56.279 INFO Fetch successful
Sep 6 01:19:56.313719 coreos-metadata[928]: Sep 06 01:19:56.313 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 6 01:19:56.336775 coreos-metadata[928]: Sep 06 01:19:56.336 INFO Fetch successful
Sep 6 01:19:56.350697 coreos-metadata[928]: Sep 06 01:19:56.350 INFO wrote hostname ci-3510.3.8-n-dced7724bc to /sysroot/etc/hostname
Sep 6 01:19:56.359869 systemd[1]: Finished flatcar-metadata-hostname.service.
Sep 6 01:19:56.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:56.365899 systemd[1]: Starting ignition-files.service...
Sep 6 01:19:56.381057 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Sep 6 01:19:56.400202 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1007)
Sep 6 01:19:56.412215 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 01:19:56.412252 kernel: BTRFS info (device sda6): using free space tree
Sep 6 01:19:56.412262 kernel: BTRFS info (device sda6): has skinny extents
Sep 6 01:19:56.421362 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Sep 6 01:19:56.438544 ignition[1026]: INFO : Ignition 2.14.0
Sep 6 01:19:56.443200 ignition[1026]: INFO : Stage: files
Sep 6 01:19:56.443200 ignition[1026]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 6 01:19:56.443200 ignition[1026]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 6 01:19:56.466492 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 6 01:19:56.466492 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 01:19:56.466492 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 01:19:56.466492 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 01:19:56.525147 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 01:19:56.533217 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 01:19:56.541888 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 01:19:56.540515 unknown[1026]: wrote ssh authorized keys file for user: core
Sep 6 01:19:56.555378 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 6 01:19:56.555378 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 6 01:19:56.620238 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 01:19:57.068046 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 6 01:19:57.079689 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 01:19:57.079689 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 6 01:19:57.141339 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 01:19:57.221237 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 6 01:19:57.231266 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2117133054"
Sep 6 01:19:57.365342 ignition[1026]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2117133054": device or resource busy
Sep 6 01:19:57.365342 ignition[1026]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2117133054", trying btrfs: device or resource busy
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2117133054"
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2117133054"
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2117133054"
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2117133054"
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Sep 6 01:19:57.365342 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3781533148"
Sep 6 01:19:57.365342 ignition[1026]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3781533148": device or resource busy
Sep 6 01:19:57.253771 systemd[1]: mnt-oem2117133054.mount: Deactivated successfully.
Sep 6 01:19:57.539426 ignition[1026]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3781533148", trying btrfs: device or resource busy
Sep 6 01:19:57.539426 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3781533148"
Sep 6 01:19:57.539426 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3781533148"
Sep 6 01:19:57.539426 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3781533148"
Sep 6 01:19:57.539426 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3781533148"
Sep 6 01:19:57.539426 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 6 01:19:57.539426 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 6 01:19:57.539426 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 6 01:19:57.281081 systemd[1]: mnt-oem3781533148.mount: Deactivated successfully.
Sep 6 01:19:58.001005 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Sep 6 01:19:58.238219 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(14): [started] processing unit "waagent.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(14): [finished] processing unit "waagent.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(15): [started] processing unit "nvidia.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(15): [finished] processing unit "nvidia.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(19): [started] setting preset to enabled for "waagent.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service"
Sep 6 01:19:58.253503 ignition[1026]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:19:58.253503 ignition[1026]: INFO : files: files passed Sep 6 01:19:58.253503 ignition[1026]: INFO : Ignition finished successfully Sep 6 01:19:58.606452 kernel: kauditd_printk_skb: 1 callbacks suppressed Sep 6 01:19:58.606475 kernel: audit: type=1130 audit(1757121598.265:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.606492 kernel: audit: type=1130 audit(1757121598.326:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.606502 kernel: audit: type=1130 audit(1757121598.372:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.606511 kernel: audit: type=1131 audit(1757121598.372:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.606520 kernel: audit: type=1130 audit(1757121598.465:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.606529 kernel: audit: type=1131 audit(1757121598.465:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:58.606538 kernel: audit: type=1130 audit(1757121598.566:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.253545 systemd[1]: Finished ignition-files.service. Sep 6 01:19:58.268243 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 6 01:19:58.634373 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 01:19:58.300114 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 01:19:58.681274 kernel: audit: type=1131 audit(1757121598.651:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.300980 systemd[1]: Starting ignition-quench.service... Sep 6 01:19:58.314005 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 01:19:58.355003 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 01:19:58.355090 systemd[1]: Finished ignition-quench.service. Sep 6 01:19:58.372540 systemd[1]: Reached target ignition-complete.target. Sep 6 01:19:58.431462 systemd[1]: Starting initrd-parse-etc.service... Sep 6 01:19:58.454373 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 01:19:58.454469 systemd[1]: Finished initrd-parse-etc.service. Sep 6 01:19:58.465843 systemd[1]: Reached target initrd-fs.target. Sep 6 01:19:58.514677 systemd[1]: Reached target initrd.target. Sep 6 01:19:58.528650 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 01:19:58.813142 kernel: audit: type=1131 audit(1757121598.785:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:58.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.537026 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 01:19:58.559715 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 01:19:58.846225 kernel: audit: type=1131 audit(1757121598.822:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.600491 systemd[1]: Starting initrd-cleanup.service... Sep 6 01:19:58.618706 systemd[1]: Stopped target nss-lookup.target. Sep 6 01:19:58.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.623503 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 01:19:58.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.629191 systemd[1]: Stopped target timers.target. Sep 6 01:19:58.638541 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 01:19:58.638658 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 01:19:58.651507 systemd[1]: Stopped target initrd.target.
Sep 6 01:19:58.895437 ignition[1064]: INFO : Ignition 2.14.0 Sep 6 01:19:58.895437 ignition[1064]: INFO : Stage: umount Sep 6 01:19:58.895437 ignition[1064]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:19:58.895437 ignition[1064]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 6 01:19:58.895437 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 6 01:19:58.895437 ignition[1064]: INFO : umount: umount passed Sep 6 01:19:58.895437 ignition[1064]: INFO : Ignition finished successfully Sep 6 01:19:58.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.681378 systemd[1]: Stopped target basic.target. Sep 6 01:19:58.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 6 01:19:58.685877 systemd[1]: Stopped target ignition-complete.target. Sep 6 01:19:58.696199 systemd[1]: Stopped target ignition-diskful.target. Sep 6 01:19:58.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.708413 systemd[1]: Stopped target initrd-root-device.target. Sep 6 01:19:58.719860 systemd[1]: Stopped target remote-fs.target. Sep 6 01:19:58.729145 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 01:19:58.739342 systemd[1]: Stopped target sysinit.target. Sep 6 01:19:58.749122 systemd[1]: Stopped target local-fs.target. Sep 6 01:19:58.760449 systemd[1]: Stopped target local-fs-pre.target. Sep 6 01:19:58.769197 systemd[1]: Stopped target swap.target. Sep 6 01:19:59.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.777130 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 01:19:58.777253 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 01:19:58.785876 systemd[1]: Stopped target cryptsetup.target. Sep 6 01:19:59.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:59.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.813543 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 01:19:59.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.813655 systemd[1]: Stopped dracut-initqueue.service. Sep 6 01:19:59.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.822605 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 01:19:59.120000 audit: BPF prog-id=6 op=UNLOAD Sep 6 01:19:58.822701 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 01:19:58.846681 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 01:19:58.846769 systemd[1]: Stopped ignition-files.service. Sep 6 01:19:59.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.855077 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 01:19:59.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.855162 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 01:19:59.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.864767 systemd[1]: Stopping ignition-mount.service... Sep 6 01:19:58.882599 systemd[1]: Stopping sysroot-boot.service... Sep 6 01:19:58.891022 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Sep 6 01:19:59.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.891268 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 01:19:58.900246 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 01:19:58.900352 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 01:19:59.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.911781 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 01:19:59.257667 kernel: hv_netvsc 000d3afd-f350-000d-3afd-f350000d3afd eth0: Data path switched from VF: enP62150s1 Sep 6 01:19:59.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:59.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.912551 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 01:19:59.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.912660 systemd[1]: Stopped ignition-mount.service. Sep 6 01:19:59.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.919483 systemd[1]: ignition-disks.service: Deactivated successfully. 
Sep 6 01:19:59.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.919574 systemd[1]: Stopped ignition-disks.service. Sep 6 01:19:59.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:59.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:58.932213 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 01:19:58.932262 systemd[1]: Stopped ignition-kargs.service. Sep 6 01:19:58.968331 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 01:19:58.968385 systemd[1]: Stopped ignition-fetch.service. Sep 6 01:19:58.978783 systemd[1]: Stopped target network.target. Sep 6 01:19:58.987845 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 01:19:58.987903 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 01:19:58.998086 systemd[1]: Stopped target paths.target. Sep 6 01:19:59.010530 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 01:19:59.014203 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 01:19:59.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:59.020500 systemd[1]: Stopped target slices.target. Sep 6 01:19:59.028794 systemd[1]: Stopped target sockets.target. Sep 6 01:19:59.038817 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 01:19:59.038850 systemd[1]: Closed iscsid.socket. 
Sep 6 01:19:59.045963 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 01:19:59.045988 systemd[1]: Closed iscsiuio.socket. Sep 6 01:19:59.053820 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 01:19:59.053862 systemd[1]: Stopped ignition-setup.service. Sep 6 01:19:59.065249 systemd[1]: Stopping systemd-networkd.service... Sep 6 01:19:59.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:59.072374 systemd[1]: Stopping systemd-resolved.service... Sep 6 01:19:59.082093 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 01:19:59.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:19:59.082223 systemd-networkd[868]: eth0: DHCPv6 lease lost Sep 6 01:19:59.433000 audit: BPF prog-id=9 op=UNLOAD Sep 6 01:19:59.082893 systemd[1]: Finished initrd-cleanup.service. Sep 6 01:19:59.091871 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 01:19:59.091972 systemd[1]: Stopped systemd-networkd.service. Sep 6 01:19:59.102644 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 01:19:59.102738 systemd[1]: Stopped systemd-resolved.service. Sep 6 01:19:59.113527 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 01:19:59.113562 systemd[1]: Closed systemd-networkd.socket. Sep 6 01:19:59.498442 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Sep 6 01:19:59.498495 iscsid[877]: iscsid shutting down. Sep 6 01:19:59.126887 systemd[1]: Stopping network-cleanup.service... Sep 6 01:19:59.136143 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 01:19:59.136289 systemd[1]: Stopped parse-ip-for-networkd.service. 
Sep 6 01:19:59.145670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 01:19:59.145720 systemd[1]: Stopped systemd-sysctl.service. Sep 6 01:19:59.160406 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 01:19:59.160452 systemd[1]: Stopped systemd-modules-load.service. Sep 6 01:19:59.165734 systemd[1]: Stopping systemd-udevd.service... Sep 6 01:19:59.180572 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 01:19:59.191466 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 01:19:59.191617 systemd[1]: Stopped systemd-udevd.service. Sep 6 01:19:59.197581 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 01:19:59.197631 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 01:19:59.209731 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 01:19:59.209772 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 01:19:59.218888 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 01:19:59.218942 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 01:19:59.229019 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 01:19:59.229057 systemd[1]: Stopped dracut-cmdline.service. Sep 6 01:19:59.245223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 01:19:59.245267 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 01:19:59.252333 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 01:19:59.264429 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 01:19:59.264495 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 01:19:59.276096 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 01:19:59.276204 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 01:19:59.281360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 01:19:59.281406 systemd[1]: Stopped systemd-vconsole-setup.service. 
Sep 6 01:19:59.292269 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 01:19:59.292751 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 01:19:59.292844 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 01:19:59.348488 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 01:19:59.348589 systemd[1]: Stopped network-cleanup.service. Sep 6 01:19:59.401125 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 01:19:59.401277 systemd[1]: Stopped sysroot-boot.service. Sep 6 01:19:59.407138 systemd[1]: Reached target initrd-switch-root.target. Sep 6 01:19:59.418672 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 01:19:59.418728 systemd[1]: Stopped initrd-setup-root.service. Sep 6 01:19:59.428520 systemd[1]: Starting initrd-switch-root.service... Sep 6 01:19:59.448220 systemd[1]: Switching root. Sep 6 01:19:59.499583 systemd-journald[276]: Journal stopped Sep 6 01:20:09.929495 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 01:20:09.929519 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 01:20:09.929529 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 01:20:09.929540 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 01:20:09.929548 kernel: SELinux: policy capability open_perms=1 Sep 6 01:20:09.929555 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 01:20:09.929564 kernel: SELinux: policy capability always_check_network=0 Sep 6 01:20:09.929572 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 01:20:09.929588 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 01:20:09.929634 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 01:20:09.929679 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 01:20:09.929690 systemd[1]: Successfully loaded SELinux policy in 253.350ms. 
Sep 6 01:20:09.929709 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.836ms. Sep 6 01:20:09.929719 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:20:09.929730 systemd[1]: Detected virtualization microsoft. Sep 6 01:20:09.929740 systemd[1]: Detected architecture arm64. Sep 6 01:20:09.929758 systemd[1]: Detected first boot. Sep 6 01:20:09.929767 systemd[1]: Hostname set to . Sep 6 01:20:09.929776 systemd[1]: Initializing machine ID from random generator. Sep 6 01:20:09.929787 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 01:20:09.929796 kernel: kauditd_printk_skb: 39 callbacks suppressed Sep 6 01:20:09.929806 kernel: audit: type=1400 audit(1757121603.447:87): avc: denied { associate } for pid=1098 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 01:20:09.929818 kernel: audit: type=1300 audit(1757121603.447:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=400002229c a1=40000283a8 a2=4000026800 a3=32 items=0 ppid=1081 pid=1098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:09.929828 kernel: audit: type=1327 audit(1757121603.447:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 6 01:20:09.929837 kernel: audit: type=1400 audit(1757121603.456:88): avc: denied { associate } for pid=1098 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 01:20:09.929854 kernel: audit: type=1300 audit(1757121603.456:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022379 a2=1ed a3=0 items=2 ppid=1081 pid=1098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:09.929863 kernel: audit: type=1307 audit(1757121603.456:88): cwd="/" Sep 6 01:20:09.929873 kernel: audit: type=1302 audit(1757121603.456:88): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:09.929883 kernel: audit: type=1302 audit(1757121603.456:88): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:09.929900 kernel: audit: type=1327 audit(1757121603.456:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:20:09.929909 systemd[1]: Populated /etc with preset unit settings.
Sep 6 01:20:09.929919 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:20:09.929935 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:20:09.929948 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:20:09.929958 kernel: audit: type=1334 audit(1757121609.152:89): prog-id=12 op=LOAD Sep 6 01:20:09.929967 kernel: audit: type=1334 audit(1757121609.152:90): prog-id=3 op=UNLOAD Sep 6 01:20:09.929985 kernel: audit: type=1334 audit(1757121609.160:91): prog-id=13 op=LOAD Sep 6 01:20:09.929993 kernel: audit: type=1334 audit(1757121609.167:92): prog-id=14 op=LOAD Sep 6 01:20:09.930002 kernel: audit: type=1334 audit(1757121609.167:93): prog-id=4 op=UNLOAD Sep 6 01:20:09.930011 kernel: audit: type=1334 audit(1757121609.167:94): prog-id=5 op=UNLOAD Sep 6 01:20:09.930033 kernel: audit: type=1334 audit(1757121609.174:95): prog-id=15 op=LOAD Sep 6 01:20:09.930041 kernel: audit: type=1334 audit(1757121609.174:96): prog-id=12 op=UNLOAD Sep 6 01:20:09.930069 kernel: audit: type=1334 audit(1757121609.181:97): prog-id=16 op=LOAD Sep 6 01:20:09.930089 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 01:20:09.930099 kernel: audit: type=1334 audit(1757121609.188:98): prog-id=17 op=LOAD Sep 6 01:20:09.930108 systemd[1]: Stopped iscsiuio.service. Sep 6 01:20:09.930118 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 01:20:09.930127 systemd[1]: Stopped iscsid.service. Sep 6 01:20:09.930146 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 01:20:09.930156 systemd[1]: Stopped initrd-switch-root.service. 
Sep 6 01:20:09.930166 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 01:20:09.930203 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 01:20:09.930223 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 01:20:09.930233 systemd[1]: Created slice system-getty.slice. Sep 6 01:20:09.930242 systemd[1]: Created slice system-modprobe.slice. Sep 6 01:20:09.930255 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 01:20:09.930264 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 01:20:09.930273 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 01:20:09.930285 systemd[1]: Created slice user.slice. Sep 6 01:20:09.930296 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:20:09.930308 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 01:20:09.930319 systemd[1]: Set up automount boot.automount. Sep 6 01:20:09.930330 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 01:20:09.930341 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 01:20:09.930350 systemd[1]: Stopped target initrd-fs.target. Sep 6 01:20:09.930360 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 01:20:09.930373 systemd[1]: Reached target integritysetup.target. Sep 6 01:20:09.930384 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:20:09.930393 systemd[1]: Reached target remote-fs.target. Sep 6 01:20:09.930405 systemd[1]: Reached target slices.target. Sep 6 01:20:09.930418 systemd[1]: Reached target swap.target. Sep 6 01:20:09.930437 systemd[1]: Reached target torcx.target. Sep 6 01:20:09.930448 systemd[1]: Reached target veritysetup.target. Sep 6 01:20:09.930457 systemd[1]: Listening on systemd-coredump.socket. Sep 6 01:20:09.930470 systemd[1]: Listening on systemd-initctl.socket. Sep 6 01:20:09.930480 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:20:09.930489 systemd[1]: Listening on systemd-udevd-control.socket. 
Sep 6 01:20:09.930504 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:20:09.930513 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 01:20:09.930522 systemd[1]: Mounting dev-hugepages.mount... Sep 6 01:20:09.930533 systemd[1]: Mounting dev-mqueue.mount... Sep 6 01:20:09.930543 systemd[1]: Mounting media.mount... Sep 6 01:20:09.930554 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 01:20:09.930563 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 01:20:09.930572 systemd[1]: Mounting tmp.mount... Sep 6 01:20:09.930581 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 01:20:09.930591 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:09.930600 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:20:09.930610 systemd[1]: Starting modprobe@configfs.service... Sep 6 01:20:09.930620 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:09.930629 systemd[1]: Starting modprobe@drm.service... Sep 6 01:20:09.930639 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:09.930648 systemd[1]: Starting modprobe@fuse.service... Sep 6 01:20:09.930657 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:09.930667 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 01:20:09.930677 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 01:20:09.930686 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 01:20:09.930696 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 01:20:09.930706 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 01:20:09.930715 kernel: fuse: init (API version 7.34) Sep 6 01:20:09.930724 systemd[1]: Stopped systemd-journald.service. Sep 6 01:20:09.930733 kernel: loop: module loaded Sep 6 01:20:09.930742 systemd[1]: systemd-journald.service: Consumed 3.232s CPU time. 
Sep 6 01:20:09.930754 systemd[1]: Starting systemd-journald.service... Sep 6 01:20:09.930764 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:20:09.930773 systemd[1]: Starting systemd-network-generator.service... Sep 6 01:20:09.930782 systemd[1]: Starting systemd-remount-fs.service... Sep 6 01:20:09.930794 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:20:09.930803 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 01:20:09.930812 systemd[1]: Stopped verity-setup.service. Sep 6 01:20:09.930821 systemd[1]: Mounted dev-hugepages.mount. Sep 6 01:20:09.930831 systemd[1]: Mounted dev-mqueue.mount. Sep 6 01:20:09.930840 systemd[1]: Mounted media.mount. Sep 6 01:20:09.930853 systemd-journald[1204]: Journal started Sep 6 01:20:09.930895 systemd-journald[1204]: Runtime Journal (/run/log/journal/07221de17fc74f9c8bf03b74a2d372b2) is 8.0M, max 78.5M, 70.5M free. Sep 6 01:20:01.458000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 01:20:02.214000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:20:02.214000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:20:02.214000 audit: BPF prog-id=10 op=LOAD Sep 6 01:20:02.214000 audit: BPF prog-id=10 op=UNLOAD Sep 6 01:20:02.214000 audit: BPF prog-id=11 op=LOAD Sep 6 01:20:02.214000 audit: BPF prog-id=11 op=UNLOAD Sep 6 01:20:03.447000 audit[1098]: AVC avc: denied { associate } for pid=1098 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 01:20:03.447000 audit[1098]: SYSCALL arch=c00000b7 syscall=5 
success=yes exit=0 a0=400002229c a1=40000283a8 a2=4000026800 a3=32 items=0 ppid=1081 pid=1098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:03.447000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:20:03.456000 audit[1098]: AVC avc: denied { associate } for pid=1098 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 01:20:03.456000 audit[1098]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022379 a2=1ed a3=0 items=2 ppid=1081 pid=1098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:03.456000 audit: CWD cwd="/" Sep 6 01:20:03.456000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:03.456000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:03.456000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 01:20:09.152000 audit: BPF prog-id=12 op=LOAD Sep 6 01:20:09.152000 audit: BPF prog-id=3 op=UNLOAD Sep 6 01:20:09.160000 audit: BPF prog-id=13 op=LOAD Sep 6 01:20:09.167000 audit: BPF prog-id=14 op=LOAD Sep 6 01:20:09.167000 audit: BPF prog-id=4 op=UNLOAD Sep 6 01:20:09.167000 audit: BPF prog-id=5 op=UNLOAD Sep 6 01:20:09.174000 audit: BPF prog-id=15 op=LOAD Sep 6 01:20:09.174000 audit: BPF prog-id=12 op=UNLOAD Sep 6 01:20:09.181000 audit: BPF prog-id=16 op=LOAD Sep 6 01:20:09.188000 audit: BPF prog-id=17 op=LOAD Sep 6 01:20:09.188000 audit: BPF prog-id=13 op=UNLOAD Sep 6 01:20:09.188000 audit: BPF prog-id=14 op=UNLOAD Sep 6 01:20:09.195000 audit: BPF prog-id=18 op=LOAD Sep 6 01:20:09.195000 audit: BPF prog-id=15 op=UNLOAD Sep 6 01:20:09.203000 audit: BPF prog-id=19 op=LOAD Sep 6 01:20:09.210000 audit: BPF prog-id=20 op=LOAD Sep 6 01:20:09.210000 audit: BPF prog-id=16 op=UNLOAD Sep 6 01:20:09.210000 audit: BPF prog-id=17 op=UNLOAD Sep 6 01:20:09.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.232000 audit: BPF prog-id=18 op=UNLOAD Sep 6 01:20:09.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:09.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.813000 audit: BPF prog-id=21 op=LOAD Sep 6 01:20:09.813000 audit: BPF prog-id=22 op=LOAD Sep 6 01:20:09.813000 audit: BPF prog-id=23 op=LOAD Sep 6 01:20:09.813000 audit: BPF prog-id=19 op=UNLOAD Sep 6 01:20:09.813000 audit: BPF prog-id=20 op=UNLOAD Sep 6 01:20:09.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:09.925000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 01:20:09.925000 audit[1204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd9d7de40 a2=4000 a3=1 items=0 ppid=1 pid=1204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:09.925000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 01:20:09.150986 systemd[1]: Queued start job for default target multi-user.target. Sep 6 01:20:03.397613 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:20:09.150998 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 6 01:20:03.397882 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 01:20:09.212037 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 6 01:20:03.397913 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 01:20:09.212416 systemd[1]: systemd-journald.service: Consumed 3.232s CPU time. 
Sep 6 01:20:03.397948 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 01:20:03.397958 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 01:20:03.397987 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 01:20:03.397999 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 01:20:03.398221 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 01:20:03.398255 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 01:20:03.398266 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 01:20:03.427204 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 01:20:03.427285 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 01:20:03.427338 /usr/lib/systemd/system-generators/torcx-generator[1098]: 
time="2025-09-06T01:20:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 01:20:03.427366 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 01:20:03.427400 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 01:20:03.427417 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 01:20:08.239946 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:08Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:20:08.240223 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:08Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:20:08.240330 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:08Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:20:08.240492 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:08Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 01:20:08.240539 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:08Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 01:20:08.240594 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2025-09-06T01:20:08Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 01:20:09.949408 systemd[1]: Started systemd-journald.service. Sep 6 01:20:09.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.950240 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 01:20:09.955563 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 01:20:09.960910 systemd[1]: Mounted tmp.mount. Sep 6 01:20:09.965784 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 01:20:09.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.971473 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:20:09.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:09.977328 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 01:20:09.977550 systemd[1]: Finished modprobe@configfs.service. Sep 6 01:20:09.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.983855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:09.984097 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:09.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.990320 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:20:09.990515 systemd[1]: Finished modprobe@drm.service. Sep 6 01:20:09.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:09.997391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 6 01:20:09.997606 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:10.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.003983 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 01:20:10.004168 systemd[1]: Finished modprobe@fuse.service. Sep 6 01:20:10.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.010451 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:10.010642 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:10.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.016320 systemd[1]: Finished systemd-modules-load.service. 
Sep 6 01:20:10.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.022161 systemd[1]: Finished systemd-network-generator.service. Sep 6 01:20:10.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.028811 systemd[1]: Finished systemd-remount-fs.service. Sep 6 01:20:10.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.034810 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:20:10.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.041281 systemd[1]: Reached target network-pre.target. Sep 6 01:20:10.048001 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 01:20:10.054636 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 01:20:10.060296 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 01:20:10.061788 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 01:20:10.068351 systemd[1]: Starting systemd-journal-flush.service... Sep 6 01:20:10.073777 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:20:10.074932 systemd[1]: Starting systemd-random-seed.service... 
Sep 6 01:20:10.080412 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:20:10.081539 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:20:10.088040 systemd[1]: Starting systemd-sysusers.service... Sep 6 01:20:10.094492 systemd[1]: Starting systemd-udev-settle.service... Sep 6 01:20:10.101619 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 01:20:10.110521 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 01:20:10.119837 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 01:20:10.125235 systemd[1]: Finished systemd-random-seed.service. Sep 6 01:20:10.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.131271 systemd[1]: Reached target first-boot-complete.target. Sep 6 01:20:10.133076 systemd-journald[1204]: Time spent on flushing to /var/log/journal/07221de17fc74f9c8bf03b74a2d372b2 is 13.782ms for 1112 entries. Sep 6 01:20:10.133076 systemd-journald[1204]: System Journal (/var/log/journal/07221de17fc74f9c8bf03b74a2d372b2) is 8.0M, max 2.6G, 2.6G free. Sep 6 01:20:10.201054 systemd-journald[1204]: Received client request to flush runtime journal. Sep 6 01:20:10.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.176582 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:20:10.202012 systemd[1]: Finished systemd-journal-flush.service. 
Sep 6 01:20:10.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.644643 systemd[1]: Finished systemd-sysusers.service. Sep 6 01:20:10.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:10.650840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:20:10.981299 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:20:10.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:11.101969 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 01:20:11.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:11.108000 audit: BPF prog-id=24 op=LOAD Sep 6 01:20:11.108000 audit: BPF prog-id=25 op=LOAD Sep 6 01:20:11.108000 audit: BPF prog-id=7 op=UNLOAD Sep 6 01:20:11.108000 audit: BPF prog-id=8 op=UNLOAD Sep 6 01:20:11.109245 systemd[1]: Starting systemd-udevd.service... Sep 6 01:20:11.127881 systemd-udevd[1223]: Using default interface naming scheme 'v252'. Sep 6 01:20:11.249689 systemd[1]: Started systemd-udevd.service. Sep 6 01:20:11.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:11.261000 audit: BPF prog-id=26 op=LOAD Sep 6 01:20:11.262552 systemd[1]: Starting systemd-networkd.service... Sep 6 01:20:11.296331 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Sep 6 01:20:11.327045 systemd[1]: Starting systemd-userdbd.service... Sep 6 01:20:11.325000 audit: BPF prog-id=27 op=LOAD Sep 6 01:20:11.326000 audit: BPF prog-id=28 op=LOAD Sep 6 01:20:11.326000 audit: BPF prog-id=29 op=LOAD Sep 6 01:20:11.354214 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 01:20:11.368000 audit[1232]: AVC avc: denied { confidentiality } for pid=1232 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 01:20:11.384226 kernel: hv_vmbus: registering driver hv_balloon Sep 6 01:20:11.384289 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 6 01:20:11.395696 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 6 01:20:11.368000 audit[1232]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad34c7100 a1=aa2c a2=ffffbc4924b0 a3=aaaad3425010 items=12 ppid=1223 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:11.368000 audit: CWD cwd="/" Sep 6 01:20:11.368000 audit: PATH item=0 name=(null) inode=6839 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=1 name=(null) inode=10714 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=2 name=(null) inode=10714 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=3 name=(null) inode=10715 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=4 name=(null) inode=10714 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=5 name=(null) inode=10716 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=6 name=(null) inode=10714 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=7 name=(null) inode=10717 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=8 name=(null) inode=10714 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=9 name=(null) inode=10718 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=10 name=(null) inode=10714 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:20:11.368000 audit: PATH item=11 name=(null) inode=10719 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 01:20:11.368000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 01:20:11.407214 kernel: hv_vmbus: registering driver hyperv_fb Sep 6 01:20:11.407289 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 6 01:20:11.416790 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 6 01:20:11.422251 systemd[1]: Started systemd-userdbd.service. Sep 6 01:20:11.432326 kernel: hv_utils: Registering HyperV Utility Driver Sep 6 01:20:11.432388 kernel: Console: switching to colour dummy device 80x25 Sep 6 01:20:11.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:11.438113 kernel: hv_vmbus: registering driver hv_utils Sep 6 01:20:10.994929 kernel: hv_utils: Heartbeat IC version 3.0 Sep 6 01:20:11.048757 kernel: hv_utils: Shutdown IC version 3.2 Sep 6 01:20:11.048788 kernel: Console: switching to colour frame buffer device 128x48 Sep 6 01:20:11.048802 kernel: hv_utils: TimeSync IC version 4.0 Sep 6 01:20:11.048813 systemd-journald[1204]: Time jumped backwards, rotating. Sep 6 01:20:11.204400 systemd-networkd[1244]: lo: Link UP Sep 6 01:20:11.204658 systemd-networkd[1244]: lo: Gained carrier Sep 6 01:20:11.205196 systemd-networkd[1244]: Enumeration completed Sep 6 01:20:11.206813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:20:11.213037 systemd[1]: Started systemd-networkd.service. Sep 6 01:20:11.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:11.218686 systemd[1]: Finished systemd-udev-settle.service. 
Sep 6 01:20:11.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:11.225581 systemd[1]: Starting lvm2-activation-early.service... Sep 6 01:20:11.232223 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 01:20:11.234015 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:20:11.287292 kernel: mlx5_core f2c6:00:02.0 enP62150s1: Link up Sep 6 01:20:11.314300 kernel: hv_netvsc 000d3afd-f350-000d-3afd-f350000d3afd eth0: Data path switched to VF: enP62150s1 Sep 6 01:20:11.315089 systemd-networkd[1244]: enP62150s1: Link UP Sep 6 01:20:11.315477 systemd-networkd[1244]: eth0: Link UP Sep 6 01:20:11.315487 systemd-networkd[1244]: eth0: Gained carrier Sep 6 01:20:11.321777 systemd-networkd[1244]: enP62150s1: Gained carrier Sep 6 01:20:11.331398 systemd-networkd[1244]: eth0: DHCPv4 address 10.200.20.25/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 6 01:20:11.478353 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:20:11.517142 systemd[1]: Finished lvm2-activation-early.service. Sep 6 01:20:11.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:11.523185 systemd[1]: Reached target cryptsetup.target. Sep 6 01:20:11.529699 systemd[1]: Starting lvm2-activation.service... Sep 6 01:20:11.534576 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:20:11.552151 systemd[1]: Finished lvm2-activation.service. 
Sep 6 01:20:11.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:11.557352 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:20:11.562656 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 01:20:11.562683 systemd[1]: Reached target local-fs.target. Sep 6 01:20:11.567021 systemd[1]: Reached target machines.target. Sep 6 01:20:11.572526 systemd[1]: Starting ldconfig.service... Sep 6 01:20:11.576929 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:11.576995 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:11.578224 systemd[1]: Starting systemd-boot-update.service... Sep 6 01:20:11.583992 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 01:20:11.591134 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 01:20:11.597394 systemd[1]: Starting systemd-sysext.service... Sep 6 01:20:11.628445 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1305 (bootctl) Sep 6 01:20:11.629818 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 01:20:11.659811 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 01:20:11.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:11.793882 systemd[1]: Unmounting usr-share-oem.mount... 
Sep 6 01:20:11.998117 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 01:20:11.998336 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 01:20:12.025592 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 01:20:12.026162 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 01:20:12.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.045294 kernel: loop0: detected capacity change from 0 to 207008 Sep 6 01:20:12.088300 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 01:20:12.104359 kernel: loop1: detected capacity change from 0 to 207008 Sep 6 01:20:12.108838 systemd-fsck[1312]: fsck.fat 4.2 (2021-01-31) Sep 6 01:20:12.108838 systemd-fsck[1312]: /dev/sda1: 236 files, 117310/258078 clusters Sep 6 01:20:12.111033 (sd-sysext)[1317]: Using extensions 'kubernetes'. Sep 6 01:20:12.111082 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 01:20:12.111686 (sd-sysext)[1317]: Merged extensions into '/usr'. Sep 6 01:20:12.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.122377 systemd[1]: Mounting boot.mount... Sep 6 01:20:12.137073 systemd[1]: Mounted boot.mount. Sep 6 01:20:12.147787 systemd[1]: Mounting usr-share-oem.mount... Sep 6 01:20:12.152376 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.153701 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:12.159958 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:12.165101 systemd[1]: Starting modprobe@loop.service... 
Sep 6 01:20:12.168848 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.168972 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:12.171639 systemd[1]: Finished systemd-boot-update.service. Sep 6 01:20:12.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.178931 systemd[1]: Mounted usr-share-oem.mount. Sep 6 01:20:12.183554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:12.183686 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:12.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.188558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:12.188676 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:12.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:12.193981 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:12.194172 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:12.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.199107 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:20:12.199204 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.200175 systemd[1]: Finished systemd-sysext.service. Sep 6 01:20:12.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.206020 systemd[1]: Starting ensure-sysext.service... Sep 6 01:20:12.210973 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 01:20:12.219660 systemd[1]: Reloading. Sep 6 01:20:12.235070 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 01:20:12.250520 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 01:20:12.266466 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 6 01:20:12.283415 /usr/lib/systemd/system-generators/torcx-generator[1350]: time="2025-09-06T01:20:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:20:12.283451 /usr/lib/systemd/system-generators/torcx-generator[1350]: time="2025-09-06T01:20:12Z" level=info msg="torcx already run" Sep 6 01:20:12.348912 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:20:12.348932 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:20:12.361406 systemd-networkd[1244]: eth0: Gained IPv6LL Sep 6 01:20:12.365816 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 6 01:20:12.432000 audit: BPF prog-id=30 op=LOAD Sep 6 01:20:12.432000 audit: BPF prog-id=26 op=UNLOAD Sep 6 01:20:12.433000 audit: BPF prog-id=31 op=LOAD Sep 6 01:20:12.433000 audit: BPF prog-id=21 op=UNLOAD Sep 6 01:20:12.433000 audit: BPF prog-id=32 op=LOAD Sep 6 01:20:12.433000 audit: BPF prog-id=33 op=LOAD Sep 6 01:20:12.433000 audit: BPF prog-id=22 op=UNLOAD Sep 6 01:20:12.434000 audit: BPF prog-id=23 op=UNLOAD Sep 6 01:20:12.434000 audit: BPF prog-id=34 op=LOAD Sep 6 01:20:12.434000 audit: BPF prog-id=27 op=UNLOAD Sep 6 01:20:12.434000 audit: BPF prog-id=35 op=LOAD Sep 6 01:20:12.434000 audit: BPF prog-id=36 op=LOAD Sep 6 01:20:12.434000 audit: BPF prog-id=28 op=UNLOAD Sep 6 01:20:12.435000 audit: BPF prog-id=29 op=UNLOAD Sep 6 01:20:12.435000 audit: BPF prog-id=37 op=LOAD Sep 6 01:20:12.435000 audit: BPF prog-id=38 op=LOAD Sep 6 01:20:12.435000 audit: BPF prog-id=24 op=UNLOAD Sep 6 01:20:12.435000 audit: BPF prog-id=25 op=UNLOAD Sep 6 01:20:12.444515 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:20:12.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.458632 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.460027 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:12.465604 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:12.471014 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:12.474979 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.475100 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 6 01:20:12.475928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:12.476063 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:12.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.481068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:12.481186 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:12.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.486693 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:12.486804 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:12.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:12.492608 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.493828 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:12.498988 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:12.504214 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:12.508089 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.508241 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:12.509028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:12.509155 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:12.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.514349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:12.514467 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:12.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:12.519924 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:12.520037 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:12.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.527344 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.528679 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:20:12.534694 systemd[1]: Starting modprobe@drm.service... Sep 6 01:20:12.539871 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:20:12.545211 systemd[1]: Starting modprobe@loop.service... Sep 6 01:20:12.549457 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.549588 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:12.550557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:20:12.550685 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:20:12.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:12.555964 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:20:12.556082 systemd[1]: Finished modprobe@drm.service. Sep 6 01:20:12.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.560955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:20:12.561071 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:20:12.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.567148 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:20:12.567265 systemd[1]: Finished modprobe@loop.service. Sep 6 01:20:12.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:20:12.573031 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:20:12.573100 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:20:12.574095 systemd[1]: Finished ensure-sysext.service. Sep 6 01:20:12.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.887841 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 01:20:12.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.895245 systemd[1]: Starting audit-rules.service... Sep 6 01:20:12.901390 systemd[1]: Starting clean-ca-certificates.service... Sep 6 01:20:12.908177 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 01:20:12.915000 audit: BPF prog-id=39 op=LOAD Sep 6 01:20:12.917564 systemd[1]: Starting systemd-resolved.service... Sep 6 01:20:12.922000 audit: BPF prog-id=40 op=LOAD Sep 6 01:20:12.924490 systemd[1]: Starting systemd-timesyncd.service... Sep 6 01:20:12.933249 systemd[1]: Starting systemd-update-utmp.service... Sep 6 01:20:12.942077 systemd[1]: Finished clean-ca-certificates.service. Sep 6 01:20:12.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.948758 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 6 01:20:12.964000 audit[1424]: SYSTEM_BOOT pid=1424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 01:20:12.968587 systemd[1]: Finished systemd-update-utmp.service. Sep 6 01:20:12.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:13.026727 systemd[1]: Started systemd-timesyncd.service. Sep 6 01:20:13.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:13.032388 systemd[1]: Reached target time-set.target. Sep 6 01:20:13.059133 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 01:20:13.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:13.088350 systemd-resolved[1422]: Positive Trust Anchors: Sep 6 01:20:13.088369 systemd-resolved[1422]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:20:13.088399 systemd-resolved[1422]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:20:13.131722 systemd-resolved[1422]: Using system hostname 'ci-3510.3.8-n-dced7724bc'. Sep 6 01:20:13.133909 systemd[1]: Started systemd-resolved.service. Sep 6 01:20:13.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:20:13.140579 systemd[1]: Reached target network.target. Sep 6 01:20:13.145565 systemd[1]: Reached target network-online.target. Sep 6 01:20:13.151015 systemd[1]: Reached target nss-lookup.target. Sep 6 01:20:13.275000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 01:20:13.275000 audit[1439]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdb3f2220 a2=420 a3=0 items=0 ppid=1418 pid=1439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:20:13.275000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 01:20:13.292632 augenrules[1439]: No rules Sep 6 01:20:13.293619 systemd[1]: Finished audit-rules.service. 
Sep 6 01:20:13.322401 systemd-timesyncd[1423]: Contacted time server 141.11.228.173:123 (0.flatcar.pool.ntp.org). Sep 6 01:20:13.322756 systemd-timesyncd[1423]: Initial clock synchronization to Sat 2025-09-06 01:20:13.319595 UTC. Sep 6 01:20:18.606559 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 01:20:18.618657 systemd[1]: Finished ldconfig.service. Sep 6 01:20:18.625013 systemd[1]: Starting systemd-update-done.service... Sep 6 01:20:18.660680 systemd[1]: Finished systemd-update-done.service. Sep 6 01:20:18.665846 systemd[1]: Reached target sysinit.target. Sep 6 01:20:18.670600 systemd[1]: Started motdgen.path. Sep 6 01:20:18.674850 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 01:20:18.681475 systemd[1]: Started logrotate.timer. Sep 6 01:20:18.685751 systemd[1]: Started mdadm.timer. Sep 6 01:20:18.690583 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 01:20:18.695825 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 01:20:18.695859 systemd[1]: Reached target paths.target. Sep 6 01:20:18.700389 systemd[1]: Reached target timers.target. Sep 6 01:20:18.707915 systemd[1]: Listening on dbus.socket. Sep 6 01:20:18.713516 systemd[1]: Starting docker.socket... Sep 6 01:20:18.719954 systemd[1]: Listening on sshd.socket. Sep 6 01:20:18.724850 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:18.725363 systemd[1]: Listening on docker.socket. Sep 6 01:20:18.729970 systemd[1]: Reached target sockets.target. Sep 6 01:20:18.734845 systemd[1]: Reached target basic.target. Sep 6 01:20:18.740923 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Sep 6 01:20:18.740953 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:20:18.742102 systemd[1]: Starting containerd.service... Sep 6 01:20:18.747374 systemd[1]: Starting dbus.service... Sep 6 01:20:18.752059 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 01:20:18.758122 systemd[1]: Starting extend-filesystems.service... Sep 6 01:20:18.762787 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 01:20:18.764131 systemd[1]: Starting kubelet.service... Sep 6 01:20:18.769253 systemd[1]: Starting motdgen.service... Sep 6 01:20:18.774099 systemd[1]: Started nvidia.service. Sep 6 01:20:18.780597 systemd[1]: Starting prepare-helm.service... Sep 6 01:20:18.785690 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 01:20:18.791795 systemd[1]: Starting sshd-keygen.service... Sep 6 01:20:18.798644 systemd[1]: Starting systemd-logind.service... Sep 6 01:20:18.802773 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:20:18.802832 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 01:20:18.803211 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 01:20:18.804085 systemd[1]: Starting update-engine.service... Sep 6 01:20:18.808956 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 01:20:18.819678 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 01:20:18.820358 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Sep 6 01:20:18.854928 jq[1449]: false
Sep 6 01:20:18.856179 jq[1467]: true
Sep 6 01:20:18.863969 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 01:20:18.864148 systemd[1]: Finished motdgen.service.
Sep 6 01:20:18.875923 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 01:20:18.876086 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found loop1
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found sda
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found sda1
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found sda2
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found sda3
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found usr
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found sda4
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found sda6
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found sda7
Sep 6 01:20:18.896116 extend-filesystems[1450]: Found sda9
Sep 6 01:20:18.896116 extend-filesystems[1450]: Checking size of /dev/sda9
Sep 6 01:20:19.066819 extend-filesystems[1450]: Old size kept for /dev/sda9
Sep 6 01:20:19.066819 extend-filesystems[1450]: Found sr0
Sep 6 01:20:19.143119 tar[1470]: linux-arm64/LICENSE
Sep 6 01:20:19.143119 tar[1470]: linux-arm64/helm
Sep 6 01:20:18.902342 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 6 01:20:19.023680 dbus-daemon[1448]: [system] SELinux support is enabled
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:18.979135060Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.086034411Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.086200910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.100496848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.100546562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.100776814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.100794732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.100807490Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.100818169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.100908278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 01:20:19.143707 env[1472]: time="2025-09-06T01:20:19.101126851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 01:20:19.144034 jq[1477]: true
Sep 6 01:20:18.904438 systemd-logind[1463]: New seat seat0.
Sep 6 01:20:19.121646 dbus-daemon[1448]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.101254436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.101279992Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.101329546Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.101340345Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129340092Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129382847Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129396526Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129444720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129460358Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129474516Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129536269Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129889865Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.146416 env[1472]: time="2025-09-06T01:20:19.129907623Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.146885 bash[1510]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 01:20:18.989792 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.129921342Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.129933740Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.129946579Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130081602Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130151873Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130412642Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130438519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130450717Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130496152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130508750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130520509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130531867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130543266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147037 env[1472]: time="2025-09-06T01:20:19.130600939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 01:20:18.989950 systemd[1]: Finished extend-filesystems.service.
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130614577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130630575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130643694Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130762399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130778077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130790236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130801674Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130847069Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130859987Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130877625Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 6 01:20:19.147432 env[1472]: time="2025-09-06T01:20:19.130910181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 01:20:19.023831 systemd[1]: Started dbus.service.
Sep 6 01:20:19.044212 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.131098518Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.131149072Z" level=info msg="Connect containerd service"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.131175029Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.131709044Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.131928217Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.131964893Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.132003528Z" level=info msg="containerd successfully booted in 0.159237s"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.138134141Z" level=info msg="Start subscribing containerd event"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.138189414Z" level=info msg="Start recovering state"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.138258365Z" level=info msg="Start event monitor"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.138290322Z" level=info msg="Start snapshots syncer"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.138305240Z" level=info msg="Start cni network conf syncer for default"
Sep 6 01:20:19.147732 env[1472]: time="2025-09-06T01:20:19.138313239Z" level=info msg="Start streaming server"
Sep 6 01:20:19.044252 systemd[1]: Reached target system-config.target.
Sep 6 01:20:19.095628 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 01:20:19.095660 systemd[1]: Reached target user-config.target.
Sep 6 01:20:19.111251 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 6 01:20:19.122376 systemd[1]: Started systemd-logind.service.
Sep 6 01:20:19.148308 systemd[1]: Started containerd.service.
Sep 6 01:20:19.173607 systemd[1]: nvidia.service: Deactivated successfully.
Sep 6 01:20:19.447265 update_engine[1465]: I0906 01:20:19.434469 1465 main.cc:92] Flatcar Update Engine starting
Sep 6 01:20:19.498956 systemd[1]: Started update-engine.service.
Sep 6 01:20:19.499250 update_engine[1465]: I0906 01:20:19.499006 1465 update_check_scheduler.cc:74] Next update check in 2m33s
Sep 6 01:20:19.505092 systemd[1]: Started locksmithd.service.
Sep 6 01:20:19.698780 tar[1470]: linux-arm64/README.md
Sep 6 01:20:19.712583 systemd[1]: Finished prepare-helm.service.
Sep 6 01:20:19.807159 systemd[1]: Started kubelet.service.
Sep 6 01:20:20.209327 kubelet[1553]: E0906 01:20:20.209253 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 01:20:20.211034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 01:20:20.211173 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 01:20:20.476902 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 6 01:20:20.493592 systemd[1]: Finished sshd-keygen.service.
Sep 6 01:20:20.499867 systemd[1]: Starting issuegen.service...
Sep 6 01:20:20.505432 systemd[1]: Started waagent.service.
Sep 6 01:20:20.511129 systemd[1]: issuegen.service: Deactivated successfully.
Sep 6 01:20:20.511323 systemd[1]: Finished issuegen.service.
Sep 6 01:20:20.517009 systemd[1]: Starting systemd-user-sessions.service...
Sep 6 01:20:20.556402 systemd[1]: Finished systemd-user-sessions.service.
Sep 6 01:20:20.563168 systemd[1]: Started getty@tty1.service.
Sep 6 01:20:20.569064 systemd[1]: Started serial-getty@ttyAMA0.service.
Sep 6 01:20:20.577758 systemd[1]: Reached target getty.target.
Sep 6 01:20:20.581788 systemd[1]: Reached target multi-user.target.
Sep 6 01:20:20.587668 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Sep 6 01:20:20.601018 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Sep 6 01:20:20.601172 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Sep 6 01:20:20.606583 systemd[1]: Startup finished in 795ms (kernel) + 13.375s (initrd) + 20.025s (userspace) = 34.196s.
Sep 6 01:20:20.932164 locksmithd[1549]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 6 01:20:21.163816 login[1577]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Sep 6 01:20:21.164199 login[1576]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 6 01:20:21.220580 systemd[1]: Created slice user-500.slice.
Sep 6 01:20:21.221730 systemd[1]: Starting user-runtime-dir@500.service...
Sep 6 01:20:21.224166 systemd-logind[1463]: New session 1 of user core.
Sep 6 01:20:21.268197 systemd[1]: Finished user-runtime-dir@500.service.
Sep 6 01:20:21.269686 systemd[1]: Starting user@500.service...
Sep 6 01:20:21.300788 (systemd)[1580]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:20:21.519771 systemd[1580]: Queued start job for default target default.target.
Sep 6 01:20:21.520256 systemd[1580]: Reached target paths.target.
Sep 6 01:20:21.520295 systemd[1580]: Reached target sockets.target.
Sep 6 01:20:21.520306 systemd[1580]: Reached target timers.target.
Sep 6 01:20:21.520315 systemd[1580]: Reached target basic.target.
Sep 6 01:20:21.520415 systemd[1]: Started user@500.service.
Sep 6 01:20:21.521247 systemd[1]: Started session-1.scope.
Sep 6 01:20:21.521265 systemd[1580]: Reached target default.target.
Sep 6 01:20:21.521328 systemd[1580]: Startup finished in 214ms.
Sep 6 01:20:22.165397 login[1577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Sep 6 01:20:22.169251 systemd-logind[1463]: New session 2 of user core.
Sep 6 01:20:22.169692 systemd[1]: Started session-2.scope.
Sep 6 01:20:27.008038 waagent[1574]: 2025-09-06T01:20:27.007920Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Sep 6 01:20:27.015480 waagent[1574]: 2025-09-06T01:20:27.015385Z INFO Daemon Daemon OS: flatcar 3510.3.8
Sep 6 01:20:27.020398 waagent[1574]: 2025-09-06T01:20:27.020324Z INFO Daemon Daemon Python: 3.9.16
Sep 6 01:20:27.025829 waagent[1574]: 2025-09-06T01:20:27.025743Z INFO Daemon Daemon Run daemon
Sep 6 01:20:27.030580 waagent[1574]: 2025-09-06T01:20:27.030508Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8'
Sep 6 01:20:27.050939 waagent[1574]: 2025-09-06T01:20:27.050791Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Sep 6 01:20:27.072588 waagent[1574]: 2025-09-06T01:20:27.072444Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 6 01:20:27.083051 waagent[1574]: 2025-09-06T01:20:27.082966Z INFO Daemon Daemon cloud-init is enabled: False
Sep 6 01:20:27.088570 waagent[1574]: 2025-09-06T01:20:27.088489Z INFO Daemon Daemon Using waagent for provisioning
Sep 6 01:20:27.094664 waagent[1574]: 2025-09-06T01:20:27.094592Z INFO Daemon Daemon Activate resource disk
Sep 6 01:20:27.099709 waagent[1574]: 2025-09-06T01:20:27.099642Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 6 01:20:27.114829 waagent[1574]: 2025-09-06T01:20:27.114742Z INFO Daemon Daemon Found device: None
Sep 6 01:20:27.120057 waagent[1574]: 2025-09-06T01:20:27.119978Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 6 01:20:27.132437 waagent[1574]: 2025-09-06T01:20:27.132353Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 6 01:20:27.145963 waagent[1574]: 2025-09-06T01:20:27.145887Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 6 01:20:27.152584 waagent[1574]: 2025-09-06T01:20:27.152511Z INFO Daemon Daemon Running default provisioning handler
Sep 6 01:20:27.166399 waagent[1574]: 2025-09-06T01:20:27.166218Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Sep 6 01:20:27.182369 waagent[1574]: 2025-09-06T01:20:27.182199Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 6 01:20:27.192821 waagent[1574]: 2025-09-06T01:20:27.192740Z INFO Daemon Daemon cloud-init is enabled: False
Sep 6 01:20:27.198007 waagent[1574]: 2025-09-06T01:20:27.197924Z INFO Daemon Daemon Copying ovf-env.xml
Sep 6 01:20:27.262449 waagent[1574]: 2025-09-06T01:20:27.262237Z INFO Daemon Daemon Successfully mounted dvd
Sep 6 01:20:27.338859 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 6 01:20:27.391894 waagent[1574]: 2025-09-06T01:20:27.391741Z INFO Daemon Daemon Detect protocol endpoint
Sep 6 01:20:27.397535 waagent[1574]: 2025-09-06T01:20:27.397450Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 6 01:20:27.403593 waagent[1574]: 2025-09-06T01:20:27.403512Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 6 01:20:27.410485 waagent[1574]: 2025-09-06T01:20:27.410418Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 6 01:20:27.416094 waagent[1574]: 2025-09-06T01:20:27.416031Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 6 01:20:27.421538 waagent[1574]: 2025-09-06T01:20:27.421475Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 6 01:20:27.561592 waagent[1574]: 2025-09-06T01:20:27.561469Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 6 01:20:27.570647 waagent[1574]: 2025-09-06T01:20:27.570597Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 6 01:20:27.576586 waagent[1574]: 2025-09-06T01:20:27.576510Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 6 01:20:29.534164 waagent[1574]: 2025-09-06T01:20:29.533992Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 6 01:20:29.550503 waagent[1574]: 2025-09-06T01:20:29.550416Z INFO Daemon Daemon Forcing an update of the goal state..
Sep 6 01:20:29.558059 waagent[1574]: 2025-09-06T01:20:29.557981Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Sep 6 01:20:29.639563 waagent[1574]: 2025-09-06T01:20:29.639385Z INFO Daemon Daemon Found private key matching thumbprint A59E7575516E528C273A20C56A479DEA274BD4C7
Sep 6 01:20:29.649596 waagent[1574]: 2025-09-06T01:20:29.649506Z INFO Daemon Daemon Fetch goal state completed
Sep 6 01:20:29.704367 waagent[1574]: 2025-09-06T01:20:29.704297Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: bc02f114-f911-4058-9979-b9966ecd8905 New eTag: 9848096570108386631]
Sep 6 01:20:29.716357 waagent[1574]: 2025-09-06T01:20:29.716259Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Sep 6 01:20:29.733065 waagent[1574]: 2025-09-06T01:20:29.732995Z INFO Daemon Daemon Starting provisioning
Sep 6 01:20:29.738794 waagent[1574]: 2025-09-06T01:20:29.738711Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 6 01:20:29.744281 waagent[1574]: 2025-09-06T01:20:29.744206Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-dced7724bc]
Sep 6 01:20:29.795839 waagent[1574]: 2025-09-06T01:20:29.795695Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-dced7724bc]
Sep 6 01:20:29.802842 waagent[1574]: 2025-09-06T01:20:29.802758Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 6 01:20:29.810066 waagent[1574]: 2025-09-06T01:20:29.810001Z INFO Daemon Daemon Primary interface is [eth0]
Sep 6 01:20:29.827027 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Sep 6 01:20:29.827193 systemd[1]: Stopped systemd-networkd-wait-online.service.
Sep 6 01:20:29.827249 systemd[1]: Stopping systemd-networkd-wait-online.service...
Sep 6 01:20:29.827508 systemd[1]: Stopping systemd-networkd.service...
Sep 6 01:20:29.832324 systemd-networkd[1244]: eth0: DHCPv6 lease lost
Sep 6 01:20:29.834033 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 01:20:29.834204 systemd[1]: Stopped systemd-networkd.service.
Sep 6 01:20:29.836509 systemd[1]: Starting systemd-networkd.service...
Sep 6 01:20:29.866514 systemd-networkd[1623]: enP62150s1: Link UP
Sep 6 01:20:29.866523 systemd-networkd[1623]: enP62150s1: Gained carrier
Sep 6 01:20:29.867557 systemd-networkd[1623]: eth0: Link UP
Sep 6 01:20:29.867566 systemd-networkd[1623]: eth0: Gained carrier
Sep 6 01:20:29.867935 systemd-networkd[1623]: lo: Link UP
Sep 6 01:20:29.867943 systemd-networkd[1623]: lo: Gained carrier
Sep 6 01:20:29.868180 systemd-networkd[1623]: eth0: Gained IPv6LL
Sep 6 01:20:29.869496 systemd-networkd[1623]: Enumeration completed
Sep 6 01:20:29.869633 systemd[1]: Started systemd-networkd.service.
Sep 6 01:20:29.871385 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 6 01:20:29.872580 systemd-networkd[1623]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 01:20:29.877818 waagent[1574]: 2025-09-06T01:20:29.877660Z INFO Daemon Daemon Create user account if not exists
Sep 6 01:20:29.885210 waagent[1574]: 2025-09-06T01:20:29.885085Z INFO Daemon Daemon User core already exists, skip useradd
Sep 6 01:20:29.891815 waagent[1574]: 2025-09-06T01:20:29.891732Z INFO Daemon Daemon Configure sudoer
Sep 6 01:20:29.897207 waagent[1574]: 2025-09-06T01:20:29.897134Z INFO Daemon Daemon Configure sshd
Sep 6 01:20:29.898381 systemd-networkd[1623]: eth0: DHCPv4 address 10.200.20.25/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 6 01:20:29.900571 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 6 01:20:29.906130 waagent[1574]: 2025-09-06T01:20:29.906026Z INFO Daemon Daemon Deploy ssh public key.
Sep 6 01:20:30.317314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 6 01:20:30.317482 systemd[1]: Stopped kubelet.service.
Sep 6 01:20:30.318911 systemd[1]: Starting kubelet.service...
Sep 6 01:20:30.410172 systemd[1]: Started kubelet.service.
Sep 6 01:20:30.567934 kubelet[1633]: E0906 01:20:30.567850 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 01:20:30.570537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 01:20:30.570653 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 01:20:31.269379 waagent[1574]: 2025-09-06T01:20:31.269294Z INFO Daemon Daemon Provisioning complete
Sep 6 01:20:31.290130 waagent[1574]: 2025-09-06T01:20:31.290061Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 6 01:20:31.297624 waagent[1574]: 2025-09-06T01:20:31.297542Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 6 01:20:31.310041 waagent[1574]: 2025-09-06T01:20:31.309963Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Sep 6 01:20:31.611647 waagent[1638]: 2025-09-06T01:20:31.611553Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Sep 6 01:20:31.612727 waagent[1638]: 2025-09-06T01:20:31.612674Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 6 01:20:31.612961 waagent[1638]: 2025-09-06T01:20:31.612913Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 6 01:20:31.625855 waagent[1638]: 2025-09-06T01:20:31.625780Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Sep 6 01:20:31.626173 waagent[1638]: 2025-09-06T01:20:31.626125Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Sep 6 01:20:31.683672 waagent[1638]: 2025-09-06T01:20:31.683545Z INFO ExtHandler ExtHandler Found private key matching thumbprint A59E7575516E528C273A20C56A479DEA274BD4C7
Sep 6 01:20:31.684096 waagent[1638]: 2025-09-06T01:20:31.684046Z INFO ExtHandler ExtHandler Fetch goal state completed
Sep 6 01:20:31.698743 waagent[1638]: 2025-09-06T01:20:31.698687Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 3cf948cf-7e45-4fc5-ae1a-c8665b935304 New eTag: 9848096570108386631]
Sep 6 01:20:31.699451 waagent[1638]: 2025-09-06T01:20:31.699395Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Sep 6 01:20:31.754349 waagent[1638]: 2025-09-06T01:20:31.754178Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 6 01:20:31.764715 waagent[1638]: 2025-09-06T01:20:31.764628Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1638
Sep 6 01:20:31.768648 waagent[1638]: 2025-09-06T01:20:31.768574Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Sep 6 01:20:31.770063 waagent[1638]: 2025-09-06T01:20:31.770004Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 6 01:20:31.890986 waagent[1638]: 2025-09-06T01:20:31.890876Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 6 01:20:31.891590 waagent[1638]: 2025-09-06T01:20:31.891534Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 6 01:20:31.899535 waagent[1638]: 2025-09-06T01:20:31.899479Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 6 01:20:31.900172 waagent[1638]: 2025-09-06T01:20:31.900115Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Sep 6 01:20:31.901463 waagent[1638]: 2025-09-06T01:20:31.901399Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Sep 6 01:20:31.902920 waagent[1638]: 2025-09-06T01:20:31.902853Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 6 01:20:31.903226 waagent[1638]: 2025-09-06T01:20:31.903156Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 6 01:20:31.903941 waagent[1638]: 2025-09-06T01:20:31.903864Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 6 01:20:31.904603 waagent[1638]: 2025-09-06T01:20:31.904529Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 6 01:20:31.905105 waagent[1638]: 2025-09-06T01:20:31.905043Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 6 01:20:31.905586 waagent[1638]: 2025-09-06T01:20:31.905507Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 6 01:20:31.905586 waagent[1638]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 6 01:20:31.905586 waagent[1638]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 6 01:20:31.905586 waagent[1638]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 6 01:20:31.905586 waagent[1638]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 6 01:20:31.905586 waagent[1638]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 6 01:20:31.905586 waagent[1638]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 6 01:20:31.906500 waagent[1638]: 2025-09-06T01:20:31.906428Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 6 01:20:31.908094 waagent[1638]: 2025-09-06T01:20:31.907907Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 6 01:20:31.908628 waagent[1638]: 2025-09-06T01:20:31.908547Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 6 01:20:31.908931 waagent[1638]: 2025-09-06T01:20:31.908867Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 6 01:20:31.910215 waagent[1638]: 2025-09-06T01:20:31.910138Z INFO EnvHandler ExtHandler Configure routes
Sep 6 01:20:31.910404 waagent[1638]: 2025-09-06T01:20:31.910350Z INFO EnvHandler ExtHandler Gateway:None
Sep 6 01:20:31.910523 waagent[1638]: 2025-09-06T01:20:31.910479Z INFO EnvHandler ExtHandler Routes:None
Sep 6 01:20:31.911496 waagent[1638]: 2025-09-06T01:20:31.911433Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 6 01:20:31.911663 waagent[1638]: 2025-09-06T01:20:31.911590Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 6 01:20:31.911939 waagent[1638]: 2025-09-06T01:20:31.911872Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 6 01:20:31.926973 waagent[1638]: 2025-09-06T01:20:31.926901Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Sep 6 01:20:31.928621 waagent[1638]: 2025-09-06T01:20:31.928566Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Sep 6 01:20:31.929700 waagent[1638]: 2025-09-06T01:20:31.929645Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Sep 6 01:20:31.955636 waagent[1638]: 2025-09-06T01:20:31.955539Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Sep 6 01:20:31.956083 waagent[1638]: 2025-09-06T01:20:31.956016Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1623'
Sep 6 01:20:32.052777 waagent[1638]: 2025-09-06T01:20:32.052602Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 6 01:20:32.052777 waagent[1638]: Executing ['ip', '-a', '-o', 'link']:
Sep 6 01:20:32.052777 waagent[1638]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 6 01:20:32.052777 waagent[1638]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fd:f3:50 brd ff:ff:ff:ff:ff:ff
Sep 6 01:20:32.052777 waagent[1638]: 3: enP62150s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fd:f3:50 brd ff:ff:ff:ff:ff:ff\ altname enP62150p0s2
Sep 6 01:20:32.052777 waagent[1638]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 6 01:20:32.052777 waagent[1638]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 6 01:20:32.052777 waagent[1638]: 2: eth0 inet 10.200.20.25/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 6 01:20:32.052777 waagent[1638]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 6 01:20:32.052777 waagent[1638]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Sep 6 01:20:32.052777 waagent[1638]: 2: eth0 inet6 fe80::20d:3aff:fefd:f350/64 scope link \ valid_lft forever preferred_lft forever
Sep 6 01:20:32.220237 waagent[1638]: 2025-09-06T01:20:32.220130Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting
Sep 6 01:20:32.313580 waagent[1574]: 2025-09-06T01:20:32.313455Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Sep 6 01:20:32.319068 waagent[1574]: 2025-09-06T01:20:32.319009Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent
Sep 6 01:20:33.603428 waagent[1667]: 2025-09-06T01:20:33.603328Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1)
Sep 6 01:20:33.604113 waagent[1667]: 2025-09-06T01:20:33.604047Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8
Sep 6 01:20:33.604265 waagent[1667]: 2025-09-06T01:20:33.604209Z INFO ExtHandler ExtHandler Python: 3.9.16
Sep 6 01:20:33.604427 waagent[1667]: 2025-09-06T01:20:33.604380Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Sep 6 01:20:33.618424 waagent[1667]: 2025-09-06T01:20:33.618301Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 6 01:20:33.618869 waagent[1667]: 2025-09-06T01:20:33.618811Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 6 01:20:33.619035 waagent[1667]: 2025-09-06T01:20:33.618989Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 6 01:20:33.619295 waagent[1667]: 2025-09-06T01:20:33.619220Z INFO ExtHandler ExtHandler Initializing the goal state...
Sep 6 01:20:33.632940 waagent[1667]: 2025-09-06T01:20:33.632869Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 6 01:20:33.641881 waagent[1667]: 2025-09-06T01:20:33.641825Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 6 01:20:33.642921 waagent[1667]: 2025-09-06T01:20:33.642865Z INFO ExtHandler
Sep 6 01:20:33.643088 waagent[1667]: 2025-09-06T01:20:33.643039Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e8934771-17a9-4727-96ec-95fdb9b6352a eTag: 9848096570108386631 source: Fabric]
Sep 6 01:20:33.643846 waagent[1667]: 2025-09-06T01:20:33.643790Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 6 01:20:33.645104 waagent[1667]: 2025-09-06T01:20:33.645046Z INFO ExtHandler
Sep 6 01:20:33.645287 waagent[1667]: 2025-09-06T01:20:33.645208Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 6 01:20:33.652350 waagent[1667]: 2025-09-06T01:20:33.652300Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 6 01:20:33.652872 waagent[1667]: 2025-09-06T01:20:33.652824Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Sep 6 01:20:33.672429 waagent[1667]: 2025-09-06T01:20:33.672373Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Sep 6 01:20:33.739032 waagent[1667]: 2025-09-06T01:20:33.738900Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A59E7575516E528C273A20C56A479DEA274BD4C7', 'hasPrivateKey': True}
Sep 6 01:20:33.740450 waagent[1667]: 2025-09-06T01:20:33.740390Z INFO ExtHandler Fetch goal state from WireServer completed
Sep 6 01:20:33.741421 waagent[1667]: 2025-09-06T01:20:33.741364Z INFO ExtHandler ExtHandler Goal state initialization completed.
Sep 6 01:20:33.762400 waagent[1667]: 2025-09-06T01:20:33.762266Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 6 01:20:33.770775 waagent[1667]: 2025-09-06T01:20:33.770667Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 01:20:33.774530 waagent[1667]: 2025-09-06T01:20:33.774414Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 6 01:20:33.774755 waagent[1667]: 2025-09-06T01:20:33.774703Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 6 01:20:33.912113 waagent[1667]: 2025-09-06T01:20:33.911939Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Sep 6 01:20:33.912113 waagent[1667]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:33.912113 waagent[1667]: pkts bytes target prot opt in out source destination Sep 6 01:20:33.912113 waagent[1667]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:33.912113 waagent[1667]: pkts bytes target prot opt in out source destination Sep 6 01:20:33.912113 waagent[1667]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 6 01:20:33.912113 waagent[1667]: pkts bytes target prot opt in out source destination Sep 6 01:20:33.912113 waagent[1667]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 6 01:20:33.912113 waagent[1667]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 6 01:20:33.912113 waagent[1667]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 6 01:20:33.913196 waagent[1667]: 2025-09-06T01:20:33.913132Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 6 01:20:33.916043 waagent[1667]: 2025-09-06T01:20:33.915931Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 6 01:20:33.916345 waagent[1667]: 
2025-09-06T01:20:33.916262Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 6 01:20:33.916754 waagent[1667]: 2025-09-06T01:20:33.916695Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 6 01:20:33.924763 waagent[1667]: 2025-09-06T01:20:33.924704Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 6 01:20:33.925243 waagent[1667]: 2025-09-06T01:20:33.925187Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 6 01:20:33.933521 waagent[1667]: 2025-09-06T01:20:33.933456Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1667 Sep 6 01:20:33.936870 waagent[1667]: 2025-09-06T01:20:33.936805Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 6 01:20:33.937737 waagent[1667]: 2025-09-06T01:20:33.937681Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 6 01:20:33.938667 waagent[1667]: 2025-09-06T01:20:33.938611Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 6 01:20:33.941466 waagent[1667]: 2025-09-06T01:20:33.941407Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 6 01:20:33.941814 waagent[1667]: 2025-09-06T01:20:33.941762Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. 
python supported: [True] Sep 6 01:20:33.943175 waagent[1667]: 2025-09-06T01:20:33.943107Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 6 01:20:33.943716 waagent[1667]: 2025-09-06T01:20:33.943659Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:33.943980 waagent[1667]: 2025-09-06T01:20:33.943932Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:33.944665 waagent[1667]: 2025-09-06T01:20:33.944604Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 6 01:20:33.945085 waagent[1667]: 2025-09-06T01:20:33.944978Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 6 01:20:33.945412 waagent[1667]: 2025-09-06T01:20:33.945348Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 6 01:20:33.945799 waagent[1667]: 2025-09-06T01:20:33.945740Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 6 01:20:33.946135 waagent[1667]: 2025-09-06T01:20:33.946076Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 6 01:20:33.946135 waagent[1667]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 6 01:20:33.946135 waagent[1667]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 6 01:20:33.946135 waagent[1667]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 6 01:20:33.946135 waagent[1667]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:33.946135 waagent[1667]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:33.946135 waagent[1667]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 6 01:20:33.946587 waagent[1667]: 2025-09-06T01:20:33.946524Z INFO EnvHandler ExtHandler Configure routes Sep 6 01:20:33.946846 waagent[1667]: 2025-09-06T01:20:33.946785Z INFO EnvHandler ExtHandler Gateway:None Sep 6 01:20:33.947412 waagent[1667]: 2025-09-06T01:20:33.947349Z INFO EnvHandler ExtHandler Routes:None Sep 6 01:20:33.950648 
waagent[1667]: 2025-09-06T01:20:33.950580Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 6 01:20:33.951240 waagent[1667]: 2025-09-06T01:20:33.951165Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 6 01:20:33.954375 waagent[1667]: 2025-09-06T01:20:33.954224Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 6 01:20:33.954811 waagent[1667]: 2025-09-06T01:20:33.954750Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 6 01:20:33.955095 waagent[1667]: 2025-09-06T01:20:33.955031Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 6 01:20:33.967986 waagent[1667]: 2025-09-06T01:20:33.967902Z INFO MonitorHandler ExtHandler Network interfaces: Sep 6 01:20:33.967986 waagent[1667]: Executing ['ip', '-a', '-o', 'link']: Sep 6 01:20:33.967986 waagent[1667]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 6 01:20:33.967986 waagent[1667]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fd:f3:50 brd ff:ff:ff:ff:ff:ff Sep 6 01:20:33.967986 waagent[1667]: 3: enP62150s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fd:f3:50 brd ff:ff:ff:ff:ff:ff\ altname enP62150p0s2 Sep 6 01:20:33.967986 waagent[1667]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 6 01:20:33.967986 waagent[1667]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 6 01:20:33.967986 waagent[1667]: 2: eth0 inet 10.200.20.25/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 6 01:20:33.967986 waagent[1667]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 6 01:20:33.967986 waagent[1667]: 1: lo inet6 ::1/128 
scope host \ valid_lft forever preferred_lft forever Sep 6 01:20:33.967986 waagent[1667]: 2: eth0 inet6 fe80::20d:3aff:fefd:f350/64 scope link \ valid_lft forever preferred_lft forever Sep 6 01:20:33.978458 waagent[1667]: 2025-09-06T01:20:33.978374Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 6 01:20:33.996459 waagent[1667]: 2025-09-06T01:20:33.996385Z INFO ExtHandler ExtHandler Sep 6 01:20:33.998196 waagent[1667]: 2025-09-06T01:20:33.998127Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: dcd91810-0f53-40a5-a63a-9f810688ea4f correlation fc154af4-77bf-41e3-99d7-bb8cd807e210 created: 2025-09-06T01:19:04.853745Z] Sep 6 01:20:34.001946 waagent[1667]: 2025-09-06T01:20:34.001883Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 6 01:20:34.006743 waagent[1667]: 2025-09-06T01:20:34.006687Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] Sep 6 01:20:34.017719 waagent[1667]: 2025-09-06T01:20:34.017589Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 6 01:20:34.039509 waagent[1667]: 2025-09-06T01:20:34.039432Z INFO ExtHandler ExtHandler Looking for existing remote access users. Sep 6 01:20:34.042708 waagent[1667]: 2025-09-06T01:20:34.042603Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 6 01:20:34.046939 waagent[1667]: 2025-09-06T01:20:34.046875Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B4A3D8BF-03F5-4FBF-9950-1C74A5860616;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 6 01:20:40.817303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 01:20:40.817474 systemd[1]: Stopped kubelet.service. Sep 6 01:20:40.818826 systemd[1]: Starting kubelet.service... 
Sep 6 01:20:40.910025 systemd[1]: Started kubelet.service. Sep 6 01:20:40.998941 kubelet[1712]: E0906 01:20:40.998883 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:20:41.001009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:20:41.001124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:20:51.067314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 6 01:20:51.067482 systemd[1]: Stopped kubelet.service. Sep 6 01:20:51.068838 systemd[1]: Starting kubelet.service... Sep 6 01:20:51.169797 systemd[1]: Started kubelet.service. Sep 6 01:20:51.303300 kubelet[1721]: E0906 01:20:51.303237 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:20:51.305702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:20:51.305820 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:20:53.145679 systemd[1]: Created slice system-sshd.slice. Sep 6 01:20:53.147143 systemd[1]: Started sshd@0-10.200.20.25:22-10.200.16.10:56786.service. Sep 6 01:20:53.741484 sshd[1727]: Accepted publickey for core from 10.200.16.10 port 56786 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:20:53.756201 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:53.760641 systemd[1]: Started session-3.scope. 
Sep 6 01:20:53.761033 systemd-logind[1463]: New session 3 of user core. Sep 6 01:20:54.156072 systemd[1]: Started sshd@1-10.200.20.25:22-10.200.16.10:56800.service. Sep 6 01:20:54.610349 sshd[1732]: Accepted publickey for core from 10.200.16.10 port 56800 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:20:54.611878 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:54.616001 systemd[1]: Started session-4.scope. Sep 6 01:20:54.617131 systemd-logind[1463]: New session 4 of user core. Sep 6 01:20:54.952327 sshd[1732]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:54.954753 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Sep 6 01:20:54.954920 systemd[1]: sshd@1-10.200.20.25:22-10.200.16.10:56800.service: Deactivated successfully. Sep 6 01:20:54.955609 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 01:20:54.956251 systemd-logind[1463]: Removed session 4. Sep 6 01:20:55.033879 systemd[1]: Started sshd@2-10.200.20.25:22-10.200.16.10:56808.service. Sep 6 01:20:55.488317 sshd[1738]: Accepted publickey for core from 10.200.16.10 port 56808 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:20:55.489807 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:55.493317 systemd-logind[1463]: New session 5 of user core. Sep 6 01:20:55.493859 systemd[1]: Started session-5.scope. Sep 6 01:20:55.827263 sshd[1738]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:55.829558 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 01:20:55.830066 systemd[1]: sshd@2-10.200.20.25:22-10.200.16.10:56808.service: Deactivated successfully. Sep 6 01:20:55.830933 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Sep 6 01:20:55.831641 systemd-logind[1463]: Removed session 5. 
Sep 6 01:20:55.895537 systemd[1]: Started sshd@3-10.200.20.25:22-10.200.16.10:56812.service. Sep 6 01:20:56.346724 sshd[1744]: Accepted publickey for core from 10.200.16.10 port 56812 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:20:56.348358 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:56.352549 systemd[1]: Started session-6.scope. Sep 6 01:20:56.353088 systemd-logind[1463]: New session 6 of user core. Sep 6 01:20:56.692655 sshd[1744]: pam_unix(sshd:session): session closed for user core Sep 6 01:20:56.695113 systemd[1]: sshd@3-10.200.20.25:22-10.200.16.10:56812.service: Deactivated successfully. Sep 6 01:20:56.695784 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 01:20:56.696286 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Sep 6 01:20:56.697076 systemd-logind[1463]: Removed session 6. Sep 6 01:20:56.760983 systemd[1]: Started sshd@4-10.200.20.25:22-10.200.16.10:56822.service. Sep 6 01:20:57.174677 sshd[1750]: Accepted publickey for core from 10.200.16.10 port 56822 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:20:57.175901 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:20:57.179716 systemd-logind[1463]: New session 7 of user core. Sep 6 01:20:57.180100 systemd[1]: Started session-7.scope. Sep 6 01:20:57.668731 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 01:20:57.668952 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:20:57.688189 systemd[1]: Starting docker.service... 
Sep 6 01:20:57.740496 env[1763]: time="2025-09-06T01:20:57.740446533Z" level=info msg="Starting up" Sep 6 01:20:57.741845 env[1763]: time="2025-09-06T01:20:57.741821199Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:20:57.741941 env[1763]: time="2025-09-06T01:20:57.741927718Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:20:57.742022 env[1763]: time="2025-09-06T01:20:57.742006917Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:20:57.742073 env[1763]: time="2025-09-06T01:20:57.742061637Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:20:57.744048 env[1763]: time="2025-09-06T01:20:57.744023976Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:20:57.744138 env[1763]: time="2025-09-06T01:20:57.744124975Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:20:57.744202 env[1763]: time="2025-09-06T01:20:57.744188094Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:20:57.744258 env[1763]: time="2025-09-06T01:20:57.744245694Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:20:57.830104 env[1763]: time="2025-09-06T01:20:57.830069273Z" level=info msg="Loading containers: start." Sep 6 01:20:58.000300 kernel: Initializing XFRM netlink socket Sep 6 01:20:58.021499 env[1763]: time="2025-09-06T01:20:58.021455639Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 01:20:58.170979 systemd-networkd[1623]: docker0: Link UP Sep 6 01:20:58.201586 env[1763]: time="2025-09-06T01:20:58.201552548Z" level=info msg="Loading containers: done." 
Sep 6 01:20:58.211245 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2561686738-merged.mount: Deactivated successfully. Sep 6 01:20:58.222096 env[1763]: time="2025-09-06T01:20:58.222055347Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 01:20:58.222437 env[1763]: time="2025-09-06T01:20:58.222420263Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 01:20:58.222609 env[1763]: time="2025-09-06T01:20:58.222591221Z" level=info msg="Daemon has completed initialization" Sep 6 01:20:58.254299 systemd[1]: Started docker.service. Sep 6 01:20:58.259904 env[1763]: time="2025-09-06T01:20:58.259838135Z" level=info msg="API listen on /run/docker.sock" Sep 6 01:20:59.064289 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 6 01:21:01.317267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 6 01:21:01.317465 systemd[1]: Stopped kubelet.service. Sep 6 01:21:01.318847 systemd[1]: Starting kubelet.service... Sep 6 01:21:01.407752 systemd[1]: Started kubelet.service. Sep 6 01:21:01.558043 env[1472]: time="2025-09-06T01:21:01.557807267Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 6 01:21:01.890604 kubelet[1882]: E0906 01:21:01.569038 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:01.570901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:01.571025 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 01:21:02.693322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540915651.mount: Deactivated successfully. Sep 6 01:21:04.413823 update_engine[1465]: I0906 01:21:04.413777 1465 update_attempter.cc:509] Updating boot flags... Sep 6 01:21:05.015030 env[1472]: time="2025-09-06T01:21:05.014960629Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:05.022855 env[1472]: time="2025-09-06T01:21:05.022813340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:05.028092 env[1472]: time="2025-09-06T01:21:05.028055307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:05.033506 env[1472]: time="2025-09-06T01:21:05.033471793Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:05.034162 env[1472]: time="2025-09-06T01:21:05.034133029Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 6 01:21:05.034982 env[1472]: time="2025-09-06T01:21:05.034960184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 6 01:21:07.238833 env[1472]: time="2025-09-06T01:21:07.238784157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:07.246021 env[1472]: 
time="2025-09-06T01:21:07.245959917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:07.250682 env[1472]: time="2025-09-06T01:21:07.250641252Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:07.258758 env[1472]: time="2025-09-06T01:21:07.258713247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:07.260175 env[1472]: time="2025-09-06T01:21:07.260120439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 6 01:21:07.260834 env[1472]: time="2025-09-06T01:21:07.260803116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 6 01:21:08.946986 env[1472]: time="2025-09-06T01:21:08.946920124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:08.953958 env[1472]: time="2025-09-06T01:21:08.953917528Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:08.960058 env[1472]: time="2025-09-06T01:21:08.960021697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:08.965850 
env[1472]: time="2025-09-06T01:21:08.965812307Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:08.966854 env[1472]: time="2025-09-06T01:21:08.966828662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 6 01:21:08.968190 env[1472]: time="2025-09-06T01:21:08.968164575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 6 01:21:10.141051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895447801.mount: Deactivated successfully. Sep 6 01:21:10.903157 env[1472]: time="2025-09-06T01:21:10.903108840Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:10.912586 env[1472]: time="2025-09-06T01:21:10.912522358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:10.917424 env[1472]: time="2025-09-06T01:21:10.917390536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:10.922210 env[1472]: time="2025-09-06T01:21:10.922152954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:10.922794 env[1472]: time="2025-09-06T01:21:10.922764591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns 
image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 6 01:21:10.923588 env[1472]: time="2025-09-06T01:21:10.923567108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 01:21:11.527747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2614470149.mount: Deactivated successfully. Sep 6 01:21:11.817370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 6 01:21:11.817531 systemd[1]: Stopped kubelet.service. Sep 6 01:21:11.818915 systemd[1]: Starting kubelet.service... Sep 6 01:21:11.962551 systemd[1]: Started kubelet.service. Sep 6 01:21:12.114508 kubelet[1935]: E0906 01:21:12.114389 1935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:12.116556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:12.116671 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 6 01:21:13.796443 env[1472]: time="2025-09-06T01:21:13.796137552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:13.805389 env[1472]: time="2025-09-06T01:21:13.805339078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:13.811328 env[1472]: time="2025-09-06T01:21:13.811135416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:13.816563 env[1472]: time="2025-09-06T01:21:13.816510156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:13.817677 env[1472]: time="2025-09-06T01:21:13.817642272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 6 01:21:13.818615 env[1472]: time="2025-09-06T01:21:13.818585148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 01:21:15.927111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190795942.mount: Deactivated successfully. 
Sep 6 01:21:15.959638 env[1472]: time="2025-09-06T01:21:15.959598174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:15.969064 env[1472]: time="2025-09-06T01:21:15.969025543Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:15.974949 env[1472]: time="2025-09-06T01:21:15.974917883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:15.984342 env[1472]: time="2025-09-06T01:21:15.984296933Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:15.985005 env[1472]: time="2025-09-06T01:21:15.984975490Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 6 01:21:15.985630 env[1472]: time="2025-09-06T01:21:15.985606008Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 6 01:21:16.606400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905460941.mount: Deactivated successfully. 
Sep 6 01:21:19.136259 env[1472]: time="2025-09-06T01:21:19.136194724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:19.144392 env[1472]: time="2025-09-06T01:21:19.144349624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:19.149616 env[1472]: time="2025-09-06T01:21:19.149566171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:19.154365 env[1472]: time="2025-09-06T01:21:19.154317359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:19.155243 env[1472]: time="2025-09-06T01:21:19.155211277Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 6 01:21:22.317297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Sep 6 01:21:22.317458 systemd[1]: Stopped kubelet.service. Sep 6 01:21:22.318772 systemd[1]: Starting kubelet.service... Sep 6 01:21:22.409950 systemd[1]: Started kubelet.service. 
Sep 6 01:21:22.481539 kubelet[1961]: E0906 01:21:22.481491 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:21:22.483225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:21:22.483363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:21:25.540179 systemd[1]: Stopped kubelet.service. Sep 6 01:21:25.542888 systemd[1]: Starting kubelet.service... Sep 6 01:21:25.576731 systemd[1]: Reloading. Sep 6 01:21:25.666921 /usr/lib/systemd/system-generators/torcx-generator[1994]: time="2025-09-06T01:21:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:21:25.666954 /usr/lib/systemd/system-generators/torcx-generator[1994]: time="2025-09-06T01:21:25Z" level=info msg="torcx already run" Sep 6 01:21:25.744117 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:21:25.744136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:21:25.759593 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:21:25.853820 systemd[1]: Started kubelet.service. Sep 6 01:21:25.858945 systemd[1]: Stopping kubelet.service... 
Sep 6 01:21:25.859235 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:21:25.859515 systemd[1]: Stopped kubelet.service. Sep 6 01:21:25.861396 systemd[1]: Starting kubelet.service... Sep 6 01:21:26.157961 systemd[1]: Started kubelet.service. Sep 6 01:21:26.197172 kubelet[2065]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:21:26.197172 kubelet[2065]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 01:21:26.197172 kubelet[2065]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:21:26.197703 kubelet[2065]: I0906 01:21:26.197225 2065 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:21:27.074096 kubelet[2065]: I0906 01:21:27.074057 2065 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 01:21:27.074382 kubelet[2065]: I0906 01:21:27.074370 2065 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:21:27.074759 kubelet[2065]: I0906 01:21:27.074743 2065 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 01:21:27.095860 kubelet[2065]: E0906 01:21:27.095798 2065 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.25:6443: 
connect: connection refused" logger="UnhandledError" Sep 6 01:21:27.099003 kubelet[2065]: I0906 01:21:27.098955 2065 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:21:27.105429 kubelet[2065]: E0906 01:21:27.105383 2065 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:21:27.105429 kubelet[2065]: I0906 01:21:27.105423 2065 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:21:27.108292 kubelet[2065]: I0906 01:21:27.108255 2065 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 01:21:27.109193 kubelet[2065]: I0906 01:21:27.109156 2065 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:21:27.109423 kubelet[2065]: I0906 01:21:27.109194 2065 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.8-n-dced7724bc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 01:21:27.109516 kubelet[2065]: I0906 01:21:27.109430 2065 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:21:27.109516 kubelet[2065]: I0906 01:21:27.109439 2065 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 01:21:27.109577 kubelet[2065]: I0906 01:21:27.109561 2065 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:21:27.112441 kubelet[2065]: I0906 01:21:27.112422 2065 kubelet.go:446] 
"Attempting to sync node with API server" Sep 6 01:21:27.112505 kubelet[2065]: I0906 01:21:27.112450 2065 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:21:27.112505 kubelet[2065]: I0906 01:21:27.112469 2065 kubelet.go:352] "Adding apiserver pod source" Sep 6 01:21:27.112505 kubelet[2065]: I0906 01:21:27.112478 2065 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:21:27.115159 kubelet[2065]: W0906 01:21:27.115106 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.25:6443: connect: connection refused Sep 6 01:21:27.115252 kubelet[2065]: E0906 01:21:27.115168 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.25:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:21:27.115322 kubelet[2065]: I0906 01:21:27.115250 2065 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:21:27.115737 kubelet[2065]: I0906 01:21:27.115700 2065 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:21:27.115797 kubelet[2065]: W0906 01:21:27.115763 2065 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 6 01:21:27.116342 kubelet[2065]: I0906 01:21:27.116314 2065 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 01:21:27.116416 kubelet[2065]: I0906 01:21:27.116352 2065 server.go:1287] "Started kubelet" Sep 6 01:21:27.124466 kubelet[2065]: W0906 01:21:27.124416 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-dced7724bc&limit=500&resourceVersion=0": dial tcp 10.200.20.25:6443: connect: connection refused Sep 6 01:21:27.124657 kubelet[2065]: E0906 01:21:27.124638 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-dced7724bc&limit=500&resourceVersion=0\": dial tcp 10.200.20.25:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:21:27.126126 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 6 01:21:27.126263 kubelet[2065]: E0906 01:21:27.126243 2065 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:21:27.126366 kubelet[2065]: I0906 01:21:27.126283 2065 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:21:27.127138 kubelet[2065]: I0906 01:21:27.127074 2065 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:21:27.127439 kubelet[2065]: I0906 01:21:27.127408 2065 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:21:27.130312 kubelet[2065]: I0906 01:21:27.130265 2065 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:21:27.131781 kubelet[2065]: E0906 01:21:27.131652 2065 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.25:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-dced7724bc.18628cdd63b4421a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-dced7724bc,UID:ci-3510.3.8-n-dced7724bc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-dced7724bc,},FirstTimestamp:2025-09-06 01:21:27.116333594 +0000 UTC m=+0.953731268,LastTimestamp:2025-09-06 01:21:27.116333594 +0000 UTC m=+0.953731268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-dced7724bc,}" Sep 6 01:21:27.131860 kubelet[2065]: I0906 01:21:27.131800 2065 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 01:21:27.131927 kubelet[2065]: I0906 01:21:27.126312 2065 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 
01:21:27.132737 kubelet[2065]: I0906 01:21:27.132719 2065 server.go:479] "Adding debug handlers to kubelet server" Sep 6 01:21:27.133576 kubelet[2065]: I0906 01:21:27.133548 2065 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 01:21:27.133637 kubelet[2065]: I0906 01:21:27.133606 2065 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:21:27.134151 kubelet[2065]: E0906 01:21:27.134123 2065 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-dced7724bc\" not found" Sep 6 01:21:27.135140 kubelet[2065]: E0906 01:21:27.135102 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-dced7724bc?timeout=10s\": dial tcp 10.200.20.25:6443: connect: connection refused" interval="200ms" Sep 6 01:21:27.135549 kubelet[2065]: W0906 01:21:27.134340 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.25:6443: connect: connection refused Sep 6 01:21:27.135807 kubelet[2065]: E0906 01:21:27.135772 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.25:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:21:27.135888 kubelet[2065]: I0906 01:21:27.134887 2065 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:21:27.136458 kubelet[2065]: I0906 01:21:27.136237 2065 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 
01:21:27.137911 kubelet[2065]: I0906 01:21:27.137891 2065 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:21:27.174986 kubelet[2065]: I0906 01:21:27.174964 2065 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 01:21:27.175140 kubelet[2065]: I0906 01:21:27.175129 2065 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 01:21:27.175200 kubelet[2065]: I0906 01:21:27.175191 2065 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:21:27.183107 kubelet[2065]: I0906 01:21:27.183082 2065 policy_none.go:49] "None policy: Start" Sep 6 01:21:27.183256 kubelet[2065]: I0906 01:21:27.183246 2065 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 01:21:27.183332 kubelet[2065]: I0906 01:21:27.183323 2065 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:21:27.191175 systemd[1]: Created slice kubepods.slice. Sep 6 01:21:27.195460 systemd[1]: Created slice kubepods-besteffort.slice. Sep 6 01:21:27.207397 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 01:21:27.208883 kubelet[2065]: I0906 01:21:27.208855 2065 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:21:27.209115 kubelet[2065]: I0906 01:21:27.209001 2065 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:21:27.209115 kubelet[2065]: I0906 01:21:27.209012 2065 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:21:27.210901 kubelet[2065]: I0906 01:21:27.210871 2065 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:21:27.211346 kubelet[2065]: E0906 01:21:27.211310 2065 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 6 01:21:27.211538 kubelet[2065]: E0906 01:21:27.211525 2065 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-dced7724bc\" not found" Sep 6 01:21:27.220825 kubelet[2065]: I0906 01:21:27.220796 2065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:21:27.221961 kubelet[2065]: I0906 01:21:27.221942 2065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 01:21:27.222224 kubelet[2065]: I0906 01:21:27.222211 2065 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 01:21:27.222657 kubelet[2065]: I0906 01:21:27.222632 2065 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 01:21:27.222657 kubelet[2065]: I0906 01:21:27.222650 2065 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 01:21:27.222741 kubelet[2065]: E0906 01:21:27.222695 2065 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 6 01:21:27.223741 kubelet[2065]: W0906 01:21:27.223508 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.25:6443: connect: connection refused Sep 6 01:21:27.224139 kubelet[2065]: E0906 01:21:27.224103 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.25:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:21:27.311348 kubelet[2065]: I0906 01:21:27.311313 2065 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-dced7724bc" 
Sep 6 01:21:27.311889 kubelet[2065]: E0906 01:21:27.311866 2065 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.25:6443/api/v1/nodes\": dial tcp 10.200.20.25:6443: connect: connection refused" node="ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.331131 systemd[1]: Created slice kubepods-burstable-podddd36db5ef5a3203dad607f4bc549872.slice. Sep 6 01:21:27.334858 kubelet[2065]: I0906 01:21:27.334833 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddd36db5ef5a3203dad607f4bc549872-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" (UID: \"ddd36db5ef5a3203dad607f4bc549872\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.335021 kubelet[2065]: I0906 01:21:27.335002 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.335126 kubelet[2065]: I0906 01:21:27.335113 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.335558 kubelet[2065]: I0906 01:21:27.335542 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ff039ba80cb8d8a89d2b4fffde1888c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-dced7724bc\" (UID: 
\"8ff039ba80cb8d8a89d2b4fffde1888c\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.335691 kubelet[2065]: I0906 01:21:27.335677 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddd36db5ef5a3203dad607f4bc549872-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" (UID: \"ddd36db5ef5a3203dad607f4bc549872\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.335784 kubelet[2065]: I0906 01:21:27.335772 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddd36db5ef5a3203dad607f4bc549872-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" (UID: \"ddd36db5ef5a3203dad607f4bc549872\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.335876 kubelet[2065]: I0906 01:21:27.335864 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.335970 kubelet[2065]: I0906 01:21:27.335957 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.336067 kubelet[2065]: I0906 01:21:27.336053 2065 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.336178 kubelet[2065]: E0906 01:21:27.335480 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-dced7724bc?timeout=10s\": dial tcp 10.200.20.25:6443: connect: connection refused" interval="400ms" Sep 6 01:21:27.341087 kubelet[2065]: E0906 01:21:27.341068 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-dced7724bc\" not found" node="ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.342163 systemd[1]: Created slice kubepods-burstable-podd0fea30259b5239edbf77c88f7c27449.slice. Sep 6 01:21:27.344650 kubelet[2065]: E0906 01:21:27.344616 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-dced7724bc\" not found" node="ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.346598 systemd[1]: Created slice kubepods-burstable-pod8ff039ba80cb8d8a89d2b4fffde1888c.slice. 
Sep 6 01:21:27.348474 kubelet[2065]: E0906 01:21:27.348450 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-dced7724bc\" not found" node="ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.513643 kubelet[2065]: I0906 01:21:27.513614 2065 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.513985 kubelet[2065]: E0906 01:21:27.513958 2065 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.25:6443/api/v1/nodes\": dial tcp 10.200.20.25:6443: connect: connection refused" node="ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.643210 env[1472]: time="2025-09-06T01:21:27.642865992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-dced7724bc,Uid:ddd36db5ef5a3203dad607f4bc549872,Namespace:kube-system,Attempt:0,}" Sep 6 01:21:27.645399 env[1472]: time="2025-09-06T01:21:27.645356267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-dced7724bc,Uid:d0fea30259b5239edbf77c88f7c27449,Namespace:kube-system,Attempt:0,}" Sep 6 01:21:27.649291 env[1472]: time="2025-09-06T01:21:27.649172299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-dced7724bc,Uid:8ff039ba80cb8d8a89d2b4fffde1888c,Namespace:kube-system,Attempt:0,}" Sep 6 01:21:27.718013 kubelet[2065]: E0906 01:21:27.717912 2065 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.25:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-dced7724bc.18628cdd63b4421a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-dced7724bc,UID:ci-3510.3.8-n-dced7724bc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-dced7724bc,},FirstTimestamp:2025-09-06 01:21:27.116333594 +0000 UTC m=+0.953731268,LastTimestamp:2025-09-06 01:21:27.116333594 +0000 UTC m=+0.953731268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-dced7724bc,}" Sep 6 01:21:27.737600 kubelet[2065]: E0906 01:21:27.737565 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-dced7724bc?timeout=10s\": dial tcp 10.200.20.25:6443: connect: connection refused" interval="800ms" Sep 6 01:21:27.916035 kubelet[2065]: I0906 01:21:27.915606 2065 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-dced7724bc" Sep 6 01:21:27.916035 kubelet[2065]: E0906 01:21:27.915932 2065 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.25:6443/api/v1/nodes\": dial tcp 10.200.20.25:6443: connect: connection refused" node="ci-3510.3.8-n-dced7724bc" Sep 6 01:21:28.200827 kubelet[2065]: W0906 01:21:28.200476 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.25:6443: connect: connection refused Sep 6 01:21:28.200827 kubelet[2065]: E0906 01:21:28.200548 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.25:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:21:28.368663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668710597.mount: Deactivated 
successfully. Sep 6 01:21:28.405551 env[1472]: time="2025-09-06T01:21:28.405500544Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.408385 env[1472]: time="2025-09-06T01:21:28.408354899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.420464 env[1472]: time="2025-09-06T01:21:28.420418075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.423480 env[1472]: time="2025-09-06T01:21:28.423447430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.427830 env[1472]: time="2025-09-06T01:21:28.427802621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:21:28.431368 kubelet[2065]: W0906 01:21:28.431256 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-dced7724bc&limit=500&resourceVersion=0": dial tcp 10.200.20.25:6443: connect: connection refused Sep 6 01:21:28.431368 kubelet[2065]: E0906 01:21:28.431339 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-dced7724bc&limit=500&resourceVersion=0\": dial tcp 10.200.20.25:6443: connect: connection refused" 
logger="UnhandledError"
Sep 6 01:21:28.431869 env[1472]: time="2025-09-06T01:21:28.431837333Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:28.436064 env[1472]: time="2025-09-06T01:21:28.436036645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:28.438692 env[1472]: time="2025-09-06T01:21:28.438660520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:28.446089 env[1472]: time="2025-09-06T01:21:28.446044946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:28.451728 env[1472]: time="2025-09-06T01:21:28.451056536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:28.467992 env[1472]: time="2025-09-06T01:21:28.467942704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:28.480088 env[1472]: time="2025-09-06T01:21:28.480050161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:28.531219 env[1472]: time="2025-09-06T01:21:28.531020342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:28.531219 env[1472]: time="2025-09-06T01:21:28.531058022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:28.531219 env[1472]: time="2025-09-06T01:21:28.531068102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:28.532089 env[1472]: time="2025-09-06T01:21:28.531747821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbcdb412a9809f48af41bbdc362ac7cba5bec5f62f8177464cc956c9c6a28678 pid=2106 runtime=io.containerd.runc.v2
Sep 6 01:21:28.538632 kubelet[2065]: E0906 01:21:28.538579 2065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-dced7724bc?timeout=10s\": dial tcp 10.200.20.25:6443: connect: connection refused" interval="1.6s"
Sep 6 01:21:28.551965 systemd[1]: Started cri-containerd-bbcdb412a9809f48af41bbdc362ac7cba5bec5f62f8177464cc956c9c6a28678.scope.
Sep 6 01:21:28.565505 kubelet[2065]: W0906 01:21:28.565469 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.25:6443: connect: connection refused
Sep 6 01:21:28.565639 kubelet[2065]: E0906 01:21:28.565515 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.25:6443: connect: connection refused" logger="UnhandledError"
Sep 6 01:21:28.570358 env[1472]: time="2025-09-06T01:21:28.568508470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:28.570358 env[1472]: time="2025-09-06T01:21:28.568600390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:28.570358 env[1472]: time="2025-09-06T01:21:28.568635670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:28.570358 env[1472]: time="2025-09-06T01:21:28.568790790Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9511ed254997f906d5844d888be9d3e0fd5cc614b4878172b88ac43feb79ef4 pid=2142 runtime=io.containerd.runc.v2
Sep 6 01:21:28.594262 env[1472]: time="2025-09-06T01:21:28.594211181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-dced7724bc,Uid:ddd36db5ef5a3203dad607f4bc549872,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbcdb412a9809f48af41bbdc362ac7cba5bec5f62f8177464cc956c9c6a28678\""
Sep 6 01:21:28.597434 env[1472]: time="2025-09-06T01:21:28.597384655Z" level=info msg="CreateContainer within sandbox \"bbcdb412a9809f48af41bbdc362ac7cba5bec5f62f8177464cc956c9c6a28678\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 6 01:21:28.601009 systemd[1]: Started cri-containerd-c9511ed254997f906d5844d888be9d3e0fd5cc614b4878172b88ac43feb79ef4.scope.
Sep 6 01:21:28.603016 env[1472]: time="2025-09-06T01:21:28.601397887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:28.603016 env[1472]: time="2025-09-06T01:21:28.601441647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:28.603016 env[1472]: time="2025-09-06T01:21:28.601478087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:28.603016 env[1472]: time="2025-09-06T01:21:28.601676566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bef2c1776d25f7b9fac7d6d1befd93dfa1cd7bb50a29112d794836441bc9b6ff pid=2169 runtime=io.containerd.runc.v2
Sep 6 01:21:28.623912 systemd[1]: Started cri-containerd-bef2c1776d25f7b9fac7d6d1befd93dfa1cd7bb50a29112d794836441bc9b6ff.scope.
Sep 6 01:21:28.648904 kubelet[2065]: W0906 01:21:28.648858 2065 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.25:6443: connect: connection refused
Sep 6 01:21:28.649043 kubelet[2065]: E0906 01:21:28.648912 2065 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.25:6443: connect: connection refused" logger="UnhandledError"
Sep 6 01:21:28.659121 env[1472]: time="2025-09-06T01:21:28.659061896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-dced7724bc,Uid:d0fea30259b5239edbf77c88f7c27449,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9511ed254997f906d5844d888be9d3e0fd5cc614b4878172b88ac43feb79ef4\""
Sep 6 01:21:28.660823 env[1472]: time="2025-09-06T01:21:28.660738253Z" level=info msg="CreateContainer within sandbox \"bbcdb412a9809f48af41bbdc362ac7cba5bec5f62f8177464cc956c9c6a28678\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1c66bcfac34f7f159ce6f1d9c0f79431e0c63ed25384baaf00f159e0f91e9665\""
Sep 6 01:21:28.662389 env[1472]: time="2025-09-06T01:21:28.662337370Z" level=info msg="CreateContainer within sandbox \"c9511ed254997f906d5844d888be9d3e0fd5cc614b4878172b88ac43feb79ef4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 6 01:21:28.662558 env[1472]: time="2025-09-06T01:21:28.662530809Z" level=info msg="StartContainer for \"1c66bcfac34f7f159ce6f1d9c0f79431e0c63ed25384baaf00f159e0f91e9665\""
Sep 6 01:21:28.671310 env[1472]: time="2025-09-06T01:21:28.671253312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-dced7724bc,Uid:8ff039ba80cb8d8a89d2b4fffde1888c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bef2c1776d25f7b9fac7d6d1befd93dfa1cd7bb50a29112d794836441bc9b6ff\""
Sep 6 01:21:28.674128 env[1472]: time="2025-09-06T01:21:28.674088387Z" level=info msg="CreateContainer within sandbox \"bef2c1776d25f7b9fac7d6d1befd93dfa1cd7bb50a29112d794836441bc9b6ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 6 01:21:28.686709 systemd[1]: Started cri-containerd-1c66bcfac34f7f159ce6f1d9c0f79431e0c63ed25384baaf00f159e0f91e9665.scope.
Sep 6 01:21:28.725711 kubelet[2065]: I0906 01:21:28.724400 2065 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:28.729240 kubelet[2065]: E0906 01:21:28.729196 2065 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.25:6443/api/v1/nodes\": dial tcp 10.200.20.25:6443: connect: connection refused" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:28.731858 env[1472]: time="2025-09-06T01:21:28.731790436Z" level=info msg="CreateContainer within sandbox \"c9511ed254997f906d5844d888be9d3e0fd5cc614b4878172b88ac43feb79ef4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7b7f58207fe10564b6f78978ed39571ce780df4dd8fe9723cc3ddb253171f69\""
Sep 6 01:21:28.734122 env[1472]: time="2025-09-06T01:21:28.734087031Z" level=info msg="StartContainer for \"c7b7f58207fe10564b6f78978ed39571ce780df4dd8fe9723cc3ddb253171f69\""
Sep 6 01:21:28.742608 env[1472]: time="2025-09-06T01:21:28.741767817Z" level=info msg="StartContainer for \"1c66bcfac34f7f159ce6f1d9c0f79431e0c63ed25384baaf00f159e0f91e9665\" returns successfully"
Sep 6 01:21:28.752366 systemd[1]: Started cri-containerd-c7b7f58207fe10564b6f78978ed39571ce780df4dd8fe9723cc3ddb253171f69.scope.
Sep 6 01:21:28.761038 env[1472]: time="2025-09-06T01:21:28.760982300Z" level=info msg="CreateContainer within sandbox \"bef2c1776d25f7b9fac7d6d1befd93dfa1cd7bb50a29112d794836441bc9b6ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de80cc654323ab24a1cdddc9efbd507810c1e50603e00de51864c62dc5505c36\""
Sep 6 01:21:28.761514 env[1472]: time="2025-09-06T01:21:28.761480059Z" level=info msg="StartContainer for \"de80cc654323ab24a1cdddc9efbd507810c1e50603e00de51864c62dc5505c36\""
Sep 6 01:21:28.780259 systemd[1]: Started cri-containerd-de80cc654323ab24a1cdddc9efbd507810c1e50603e00de51864c62dc5505c36.scope.
Sep 6 01:21:28.820840 env[1472]: time="2025-09-06T01:21:28.820790824Z" level=info msg="StartContainer for \"c7b7f58207fe10564b6f78978ed39571ce780df4dd8fe9723cc3ddb253171f69\" returns successfully"
Sep 6 01:21:28.854909 env[1472]: time="2025-09-06T01:21:28.854858599Z" level=info msg="StartContainer for \"de80cc654323ab24a1cdddc9efbd507810c1e50603e00de51864c62dc5505c36\" returns successfully"
Sep 6 01:21:29.228755 kubelet[2065]: E0906 01:21:29.228724 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-dced7724bc\" not found" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:29.230752 kubelet[2065]: E0906 01:21:29.230726 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-dced7724bc\" not found" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:29.232630 kubelet[2065]: E0906 01:21:29.232608 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-dced7724bc\" not found" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:29.364668 systemd[1]: run-containerd-runc-k8s.io-bbcdb412a9809f48af41bbdc362ac7cba5bec5f62f8177464cc956c9c6a28678-runc.0xEnY8.mount: Deactivated successfully.
Sep 6 01:21:30.234840 kubelet[2065]: E0906 01:21:30.234677 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-dced7724bc\" not found" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:30.235716 kubelet[2065]: E0906 01:21:30.235589 2065 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-dced7724bc\" not found" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:30.332881 kubelet[2065]: I0906 01:21:30.332164 2065 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.309992 kubelet[2065]: I0906 01:21:32.309955 2065 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.334932 kubelet[2065]: I0906 01:21:32.334891 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.443178 kubelet[2065]: E0906 01:21:32.443144 2065 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-dced7724bc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.443479 kubelet[2065]: I0906 01:21:32.443464 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.445660 kubelet[2065]: E0906 01:21:32.445627 2065 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.445846 kubelet[2065]: I0906 01:21:32.445832 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.447966 kubelet[2065]: E0906 01:21:32.447918 2065 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.584096 kubelet[2065]: I0906 01:21:32.584059 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.586639 kubelet[2065]: E0906 01:21:32.586604 2065 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.714659 kubelet[2065]: I0906 01:21:32.714621 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:32.719213 kubelet[2065]: E0906 01:21:32.719179 2065 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-dced7724bc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:33.117383 kubelet[2065]: I0906 01:21:33.117336 2065 apiserver.go:52] "Watching apiserver"
Sep 6 01:21:33.134343 kubelet[2065]: I0906 01:21:33.134310 2065 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 6 01:21:34.363135 kubelet[2065]: I0906 01:21:34.363099 2065 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:34.375261 kubelet[2065]: W0906 01:21:34.375217 2065 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:21:35.176032 systemd[1]: Reloading.
Sep 6 01:21:35.250245 /usr/lib/systemd/system-generators/torcx-generator[2364]: time="2025-09-06T01:21:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 6 01:21:35.250635 /usr/lib/systemd/system-generators/torcx-generator[2364]: time="2025-09-06T01:21:35Z" level=info msg="torcx already run"
Sep 6 01:21:35.331434 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 6 01:21:35.331453 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 6 01:21:35.347239 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 01:21:35.453601 kubelet[2065]: I0906 01:21:35.453470 2065 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 01:21:35.456567 systemd[1]: Stopping kubelet.service...
Sep 6 01:21:35.477907 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 01:21:35.478114 systemd[1]: Stopped kubelet.service.
Sep 6 01:21:35.478165 systemd[1]: kubelet.service: Consumed 1.320s CPU time.
Sep 6 01:21:35.480407 systemd[1]: Starting kubelet.service...
Sep 6 01:21:35.572533 systemd[1]: Started kubelet.service.
Sep 6 01:21:35.615055 kubelet[2426]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 01:21:35.615397 kubelet[2426]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 6 01:21:35.615451 kubelet[2426]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 01:21:35.615628 kubelet[2426]: I0906 01:21:35.615598 2426 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 01:21:35.622501 kubelet[2426]: I0906 01:21:35.622468 2426 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 6 01:21:35.622501 kubelet[2426]: I0906 01:21:35.622494 2426 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 01:21:35.622742 kubelet[2426]: I0906 01:21:35.622723 2426 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 6 01:21:35.623995 kubelet[2426]: I0906 01:21:35.623974 2426 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 6 01:21:35.626154 kubelet[2426]: I0906 01:21:35.626126 2426 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 01:21:35.629820 kubelet[2426]: E0906 01:21:35.629776 2426 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 01:21:35.629820 kubelet[2426]: I0906 01:21:35.629820 2426 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 01:21:35.632743 kubelet[2426]: I0906 01:21:35.632718 2426 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 01:21:35.633061 kubelet[2426]: I0906 01:21:35.633028 2426 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 01:21:35.633435 kubelet[2426]: I0906 01:21:35.633127 2426 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-dced7724bc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 01:21:35.633599 kubelet[2426]: I0906 01:21:35.633585 2426 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 01:21:35.633663 kubelet[2426]: I0906 01:21:35.633654 2426 container_manager_linux.go:304] "Creating device plugin manager"
Sep 6 01:21:35.633760 kubelet[2426]: I0906 01:21:35.633749 2426 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 01:21:35.633935 kubelet[2426]: I0906 01:21:35.633923 2426 kubelet.go:446] "Attempting to sync node with API server"
Sep 6 01:21:35.634005 kubelet[2426]: I0906 01:21:35.633995 2426 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 01:21:35.634070 kubelet[2426]: I0906 01:21:35.634061 2426 kubelet.go:352] "Adding apiserver pod source"
Sep 6 01:21:35.634140 kubelet[2426]: I0906 01:21:35.634131 2426 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 01:21:35.643336 kubelet[2426]: I0906 01:21:35.640929 2426 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 6 01:21:35.643336 kubelet[2426]: I0906 01:21:35.641402 2426 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 6 01:21:35.643336 kubelet[2426]: I0906 01:21:35.641774 2426 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 6 01:21:35.643336 kubelet[2426]: I0906 01:21:35.641797 2426 server.go:1287] "Started kubelet"
Sep 6 01:21:35.643603 kubelet[2426]: I0906 01:21:35.643584 2426 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 01:21:35.650984 kubelet[2426]: I0906 01:21:35.647578 2426 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 01:21:35.650984 kubelet[2426]: I0906 01:21:35.648382 2426 server.go:479] "Adding debug handlers to kubelet server"
Sep 6 01:21:35.650984 kubelet[2426]: I0906 01:21:35.649220 2426 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 01:21:35.650984 kubelet[2426]: I0906 01:21:35.649435 2426 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 01:21:35.650984 kubelet[2426]: I0906 01:21:35.649635 2426 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 01:21:35.650984 kubelet[2426]: I0906 01:21:35.650728 2426 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 6 01:21:35.650984 kubelet[2426]: E0906 01:21:35.650908 2426 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-dced7724bc\" not found"
Sep 6 01:21:35.652417 kubelet[2426]: I0906 01:21:35.652313 2426 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 6 01:21:35.652476 kubelet[2426]: I0906 01:21:35.652430 2426 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 01:21:35.654663 kubelet[2426]: I0906 01:21:35.654640 2426 factory.go:221] Registration of the systemd container factory successfully
Sep 6 01:21:35.654896 kubelet[2426]: I0906 01:21:35.654877 2426 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 01:21:35.660387 kubelet[2426]: I0906 01:21:35.656533 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 6 01:21:35.660387 kubelet[2426]: I0906 01:21:35.657321 2426 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 6 01:21:35.660387 kubelet[2426]: I0906 01:21:35.657338 2426 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 6 01:21:35.660387 kubelet[2426]: I0906 01:21:35.657356 2426 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 6 01:21:35.660387 kubelet[2426]: I0906 01:21:35.657362 2426 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 6 01:21:35.660387 kubelet[2426]: E0906 01:21:35.657398 2426 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 01:21:35.673295 kubelet[2426]: I0906 01:21:35.669294 2426 factory.go:221] Registration of the containerd container factory successfully
Sep 6 01:21:35.733878 kubelet[2426]: I0906 01:21:35.733778 2426 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 6 01:21:35.734121 kubelet[2426]: I0906 01:21:35.734089 2426 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 6 01:21:35.734203 kubelet[2426]: I0906 01:21:35.734193 2426 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 01:21:35.734477 kubelet[2426]: I0906 01:21:35.734461 2426 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 6 01:21:35.734572 kubelet[2426]: I0906 01:21:35.734546 2426 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 6 01:21:35.734628 kubelet[2426]: I0906 01:21:35.734619 2426 policy_none.go:49] "None policy: Start"
Sep 6 01:21:35.734682 kubelet[2426]: I0906 01:21:35.734674 2426 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 6 01:21:35.734747 kubelet[2426]: I0906 01:21:35.734738 2426 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 01:21:35.734914 kubelet[2426]: I0906 01:21:35.734904 2426 state_mem.go:75] "Updated machine memory state"
Sep 6 01:21:35.738578 kubelet[2426]: I0906 01:21:35.738556 2426 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 6 01:21:35.742250 kubelet[2426]: I0906 01:21:35.742230 2426 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 01:21:35.742485 kubelet[2426]: I0906 01:21:35.742427 2426 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 01:21:35.742926 kubelet[2426]: I0906 01:21:35.742911 2426 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 01:21:35.744603 kubelet[2426]: E0906 01:21:35.744587 2426 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 6 01:21:35.757832 kubelet[2426]: I0906 01:21:35.757787 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.760450 kubelet[2426]: I0906 01:21:35.760414 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.760552 kubelet[2426]: I0906 01:21:35.760538 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.768936 kubelet[2426]: W0906 01:21:35.768902 2426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:21:35.794477 kubelet[2426]: W0906 01:21:35.793618 2426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:21:35.794477 kubelet[2426]: W0906 01:21:35.794338 2426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:21:35.794477 kubelet[2426]: E0906 01:21:35.794387 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.846087 kubelet[2426]: I0906 01:21:35.846059 2426 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.865953 kubelet[2426]: I0906 01:21:35.865917 2426 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.866092 kubelet[2426]: I0906 01:21:35.866015 2426 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.953636 kubelet[2426]: I0906 01:21:35.953597 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddd36db5ef5a3203dad607f4bc549872-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" (UID: \"ddd36db5ef5a3203dad607f4bc549872\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.954084 kubelet[2426]: I0906 01:21:35.953645 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddd36db5ef5a3203dad607f4bc549872-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" (UID: \"ddd36db5ef5a3203dad607f4bc549872\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.954084 kubelet[2426]: I0906 01:21:35.953667 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.954084 kubelet[2426]: I0906 01:21:35.953707 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.954084 kubelet[2426]: I0906 01:21:35.953726 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.954084 kubelet[2426]: I0906 01:21:35.953744 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ff039ba80cb8d8a89d2b4fffde1888c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-dced7724bc\" (UID: \"8ff039ba80cb8d8a89d2b4fffde1888c\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.954216 kubelet[2426]: I0906 01:21:35.953761 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddd36db5ef5a3203dad607f4bc549872-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" (UID: \"ddd36db5ef5a3203dad607f4bc549872\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.954216 kubelet[2426]: I0906 01:21:35.953788 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:35.954216 kubelet[2426]: I0906 01:21:35.953804 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0fea30259b5239edbf77c88f7c27449-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-dced7724bc\" (UID: \"d0fea30259b5239edbf77c88f7c27449\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:36.212008 sudo[2457]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 6 01:21:36.212623 sudo[2457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 6 01:21:36.636630 kubelet[2426]: I0906 01:21:36.636596 2426 apiserver.go:52] "Watching apiserver"
Sep 6 01:21:36.653303 kubelet[2426]: I0906 01:21:36.653258 2426 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 6 01:21:36.713425 kubelet[2426]: I0906 01:21:36.713397 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:36.713879 kubelet[2426]: I0906 01:21:36.713773 2426 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:36.726910 kubelet[2426]: W0906 01:21:36.726877 2426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:21:36.727170 kubelet[2426]: E0906 01:21:36.727136 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-dced7724bc\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:36.730930 kubelet[2426]: W0906 01:21:36.730908 2426 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 6 01:21:36.731105 kubelet[2426]: E0906 01:21:36.731089 2426 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-dced7724bc\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc"
Sep 6 01:21:36.746729 sudo[2457]: pam_unix(sudo:session): session closed for user root
Sep 6 01:21:36.782127 kubelet[2426]: I0906 01:21:36.782057 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-dced7724bc" podStartSLOduration=1.782040294 podStartE2EDuration="1.782040294s" podCreationTimestamp="2025-09-06 01:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:36.755357295 +0000 UTC m=+1.177540990" watchObservedRunningTime="2025-09-06 01:21:36.782040294 +0000 UTC m=+1.204223989"
Sep 6 01:21:36.805536 kubelet[2426]: I0906 01:21:36.805473 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-dced7724bc" podStartSLOduration=1.8054553370000002 podStartE2EDuration="1.805455337s" podCreationTimestamp="2025-09-06 01:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:36.783216492 +0000 UTC m=+1.205400187" watchObservedRunningTime="2025-09-06 01:21:36.805455337 +0000 UTC m=+1.227639032"
Sep 6 01:21:36.825581 kubelet[2426]: I0906 01:21:36.825533 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-dced7724bc" podStartSLOduration=2.8254966660000003 podStartE2EDuration="2.825496666s" podCreationTimestamp="2025-09-06 01:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:36.806810415 +0000 UTC m=+1.228994070" watchObservedRunningTime="2025-09-06 01:21:36.825496666 +0000 UTC m=+1.247680361"
Sep 6 01:21:38.617613 sudo[1753]: pam_unix(sudo:session): session closed for user root
Sep 6 01:21:38.706961 sshd[1750]: pam_unix(sshd:session): session closed for user core
Sep 6 01:21:38.709328 systemd[1]: sshd@4-10.200.20.25:22-10.200.16.10:56822.service: Deactivated successfully.
Sep 6 01:21:38.710052 systemd[1]: session-7.scope: Deactivated successfully.
Sep 6 01:21:38.710202 systemd[1]: session-7.scope: Consumed 7.753s CPU time.
Sep 6 01:21:38.710608 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit.
Sep 6 01:21:38.711589 systemd-logind[1463]: Removed session 7.
Sep 6 01:21:41.112509 kubelet[2426]: I0906 01:21:41.112411 2426 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 6 01:21:41.113233 env[1472]: time="2025-09-06T01:21:41.113191397Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 01:21:41.113658 kubelet[2426]: I0906 01:21:41.113641 2426 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 6 01:21:41.690153 systemd[1]: Created slice kubepods-besteffort-podb8a7bb23_a990_484a_94cb_31ae480fe29b.slice.
Sep 6 01:21:41.703866 systemd[1]: Created slice kubepods-burstable-pod929a9d46_8e13_4d36_a4b6_93822e1ec811.slice.
Sep 6 01:21:41.782616 kubelet[2426]: I0906 01:21:41.782574 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-hostproc\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.782616 kubelet[2426]: I0906 01:21:41.782616 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cni-path\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.782793 kubelet[2426]: I0906 01:21:41.782635 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5d4t\" (UniqueName: \"kubernetes.io/projected/b8a7bb23-a990-484a-94cb-31ae480fe29b-kube-api-access-l5d4t\") pod \"kube-proxy-c4p9n\" (UID: \"b8a7bb23-a990-484a-94cb-31ae480fe29b\") " pod="kube-system/kube-proxy-c4p9n"
Sep 6 01:21:41.782793 kubelet[2426]: I0906 01:21:41.782656 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-host-proc-sys-kernel\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.782793 kubelet[2426]: I0906 01:21:41.782672 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8a7bb23-a990-484a-94cb-31ae480fe29b-xtables-lock\") pod \"kube-proxy-c4p9n\" (UID: \"b8a7bb23-a990-484a-94cb-31ae480fe29b\") " pod="kube-system/kube-proxy-c4p9n"
Sep 6 01:21:41.782793 kubelet[2426]: I0906 01:21:41.782685 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-etc-cni-netd\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.782793 kubelet[2426]: I0906 01:21:41.782700 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-config-path\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.782914 kubelet[2426]: I0906 01:21:41.782717 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-hubble-tls\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.782914 kubelet[2426]: I0906 01:21:41.782731 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8a7bb23-a990-484a-94cb-31ae480fe29b-kube-proxy\") pod \"kube-proxy-c4p9n\" (UID: \"b8a7bb23-a990-484a-94cb-31ae480fe29b\") " pod="kube-system/kube-proxy-c4p9n"
Sep 6 01:21:41.782914 kubelet[2426]: I0906 01:21:41.782746 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-cgroup\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.782914 kubelet[2426]: I0906 01:21:41.782761 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qz89\" (UniqueName: \"kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-kube-api-access-4qz89\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.782914 kubelet[2426]: I0906 01:21:41.782776 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8a7bb23-a990-484a-94cb-31ae480fe29b-lib-modules\") pod \"kube-proxy-c4p9n\" (UID: \"b8a7bb23-a990-484a-94cb-31ae480fe29b\") " pod="kube-system/kube-proxy-c4p9n"
Sep 6 01:21:41.782914 kubelet[2426]: I0906 01:21:41.782794 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-run\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.783046 kubelet[2426]: I0906 01:21:41.782810 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-lib-modules\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.783046 kubelet[2426]: I0906 01:21:41.782828 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-bpf-maps\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.783046 kubelet[2426]: I0906 01:21:41.782844 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-xtables-lock\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.783046 kubelet[2426]: I0906 01:21:41.782860 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/929a9d46-8e13-4d36-a4b6-93822e1ec811-clustermesh-secrets\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.783046 kubelet[2426]: I0906 01:21:41.782875 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-host-proc-sys-net\") pod \"cilium-kc6dp\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") " pod="kube-system/cilium-kc6dp"
Sep 6 01:21:41.884116 kubelet[2426]: I0906 01:21:41.884071 2426 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 6 01:21:41.907055 kubelet[2426]: E0906 01:21:41.907019 2426 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 6 01:21:41.907055 kubelet[2426]: E0906 01:21:41.907053 2426 projected.go:194] Error preparing data for projected volume kube-api-access-4qz89 for pod kube-system/cilium-kc6dp: configmap "kube-root-ca.crt" not found
Sep 6 01:21:41.907211 kubelet[2426]: E0906 01:21:41.907114 2426 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-kube-api-access-4qz89 podName:929a9d46-8e13-4d36-a4b6-93822e1ec811 nodeName:}" failed. No retries permitted until 2025-09-06 01:21:42.407093749 +0000 UTC m=+6.829277404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4qz89" (UniqueName: "kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-kube-api-access-4qz89") pod "cilium-kc6dp" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811") : configmap "kube-root-ca.crt" not found
Sep 6 01:21:41.910611 kubelet[2426]: E0906 01:21:41.910576 2426 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 6 01:21:41.910611 kubelet[2426]: E0906 01:21:41.910607 2426 projected.go:194] Error preparing data for projected volume kube-api-access-l5d4t for pod kube-system/kube-proxy-c4p9n: configmap "kube-root-ca.crt" not found
Sep 6 01:21:41.910734 kubelet[2426]: E0906 01:21:41.910678 2426 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b8a7bb23-a990-484a-94cb-31ae480fe29b-kube-api-access-l5d4t podName:b8a7bb23-a990-484a-94cb-31ae480fe29b nodeName:}" failed. No retries permitted until 2025-09-06 01:21:42.410639305 +0000 UTC m=+6.832823000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l5d4t" (UniqueName: "kubernetes.io/projected/b8a7bb23-a990-484a-94cb-31ae480fe29b-kube-api-access-l5d4t") pod "kube-proxy-c4p9n" (UID: "b8a7bb23-a990-484a-94cb-31ae480fe29b") : configmap "kube-root-ca.crt" not found
Sep 6 01:21:42.187784 systemd[1]: Created slice kubepods-besteffort-podae440b46_1442_4068_ad0a_06eb6db20fff.slice.
Sep 6 01:21:42.286908 kubelet[2426]: I0906 01:21:42.286853 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae440b46-1442-4068-ad0a-06eb6db20fff-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2nhjl\" (UID: \"ae440b46-1442-4068-ad0a-06eb6db20fff\") " pod="kube-system/cilium-operator-6c4d7847fc-2nhjl"
Sep 6 01:21:42.286908 kubelet[2426]: I0906 01:21:42.286908 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsz88\" (UniqueName: \"kubernetes.io/projected/ae440b46-1442-4068-ad0a-06eb6db20fff-kube-api-access-qsz88\") pod \"cilium-operator-6c4d7847fc-2nhjl\" (UID: \"ae440b46-1442-4068-ad0a-06eb6db20fff\") " pod="kube-system/cilium-operator-6c4d7847fc-2nhjl"
Sep 6 01:21:42.494749 env[1472]: time="2025-09-06T01:21:42.493963002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2nhjl,Uid:ae440b46-1442-4068-ad0a-06eb6db20fff,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:42.530688 env[1472]: time="2025-09-06T01:21:42.530605353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:42.530808 env[1472]: time="2025-09-06T01:21:42.530701873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:42.530808 env[1472]: time="2025-09-06T01:21:42.530726513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:42.531021 env[1472]: time="2025-09-06T01:21:42.530973553Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46 pid=2510 runtime=io.containerd.runc.v2
Sep 6 01:21:42.541603 systemd[1]: Started cri-containerd-3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46.scope.
Sep 6 01:21:42.572857 env[1472]: time="2025-09-06T01:21:42.572817417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2nhjl,Uid:ae440b46-1442-4068-ad0a-06eb6db20fff,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46\""
Sep 6 01:21:42.575937 env[1472]: time="2025-09-06T01:21:42.575863693Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 6 01:21:42.602691 env[1472]: time="2025-09-06T01:21:42.602643897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4p9n,Uid:b8a7bb23-a990-484a-94cb-31ae480fe29b,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:42.607140 env[1472]: time="2025-09-06T01:21:42.607079051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kc6dp,Uid:929a9d46-8e13-4d36-a4b6-93822e1ec811,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:42.700666 env[1472]: time="2025-09-06T01:21:42.698887808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:42.700666 env[1472]: time="2025-09-06T01:21:42.698925088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:42.700666 env[1472]: time="2025-09-06T01:21:42.698934488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:42.700666 env[1472]: time="2025-09-06T01:21:42.699140288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26 pid=2566 runtime=io.containerd.runc.v2
Sep 6 01:21:42.701345 env[1472]: time="2025-09-06T01:21:42.694326295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:21:42.701345 env[1472]: time="2025-09-06T01:21:42.694363295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:21:42.701345 env[1472]: time="2025-09-06T01:21:42.694373415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:21:42.701345 env[1472]: time="2025-09-06T01:21:42.694537094Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02fa90fa64315c4aa694604de04202470e6acb76ac1f375d3ec610abf0f5bf2c pid=2549 runtime=io.containerd.runc.v2
Sep 6 01:21:42.713161 systemd[1]: Started cri-containerd-02fa90fa64315c4aa694604de04202470e6acb76ac1f375d3ec610abf0f5bf2c.scope.
Sep 6 01:21:42.715598 systemd[1]: Started cri-containerd-b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26.scope.
Sep 6 01:21:42.751476 env[1472]: time="2025-09-06T01:21:42.751326138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4p9n,Uid:b8a7bb23-a990-484a-94cb-31ae480fe29b,Namespace:kube-system,Attempt:0,} returns sandbox id \"02fa90fa64315c4aa694604de04202470e6acb76ac1f375d3ec610abf0f5bf2c\""
Sep 6 01:21:42.756025 env[1472]: time="2025-09-06T01:21:42.755987572Z" level=info msg="CreateContainer within sandbox \"02fa90fa64315c4aa694604de04202470e6acb76ac1f375d3ec610abf0f5bf2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 6 01:21:42.771552 env[1472]: time="2025-09-06T01:21:42.771292432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kc6dp,Uid:929a9d46-8e13-4d36-a4b6-93822e1ec811,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\""
Sep 6 01:21:42.810504 env[1472]: time="2025-09-06T01:21:42.810455659Z" level=info msg="CreateContainer within sandbox \"02fa90fa64315c4aa694604de04202470e6acb76ac1f375d3ec610abf0f5bf2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6eee786b7fb015b097713e0a42abf94f32b39dbf96bd1f03094d765df1e67220\""
Sep 6 01:21:42.812139 env[1472]: time="2025-09-06T01:21:42.812034377Z" level=info msg="StartContainer for \"6eee786b7fb015b097713e0a42abf94f32b39dbf96bd1f03094d765df1e67220\""
Sep 6 01:21:42.829517 systemd[1]: Started cri-containerd-6eee786b7fb015b097713e0a42abf94f32b39dbf96bd1f03094d765df1e67220.scope.
Sep 6 01:21:42.864617 env[1472]: time="2025-09-06T01:21:42.864561867Z" level=info msg="StartContainer for \"6eee786b7fb015b097713e0a42abf94f32b39dbf96bd1f03094d765df1e67220\" returns successfully"
Sep 6 01:21:43.947561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753349486.mount: Deactivated successfully.
Sep 6 01:21:44.625827 env[1472]: time="2025-09-06T01:21:44.622129632Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:44.632684 env[1472]: time="2025-09-06T01:21:44.632638859Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:44.637736 env[1472]: time="2025-09-06T01:21:44.637700893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:44.638266 env[1472]: time="2025-09-06T01:21:44.638232772Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 6 01:21:44.641263 env[1472]: time="2025-09-06T01:21:44.640232969Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 6 01:21:44.642126 env[1472]: time="2025-09-06T01:21:44.642085087Z" level=info msg="CreateContainer within sandbox \"3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 6 01:21:44.671150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3063561658.mount: Deactivated successfully.
Sep 6 01:21:44.687445 env[1472]: time="2025-09-06T01:21:44.687391749Z" level=info msg="CreateContainer within sandbox \"3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\""
Sep 6 01:21:44.688324 env[1472]: time="2025-09-06T01:21:44.688295788Z" level=info msg="StartContainer for \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\""
Sep 6 01:21:44.705984 systemd[1]: Started cri-containerd-b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3.scope.
Sep 6 01:21:44.711509 kubelet[2426]: I0906 01:21:44.711180 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c4p9n" podStartSLOduration=3.711160719 podStartE2EDuration="3.711160719s" podCreationTimestamp="2025-09-06 01:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:21:43.75721266 +0000 UTC m=+8.179396315" watchObservedRunningTime="2025-09-06 01:21:44.711160719 +0000 UTC m=+9.133344414"
Sep 6 01:21:44.744313 env[1472]: time="2025-09-06T01:21:44.744251637Z" level=info msg="StartContainer for \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\" returns successfully"
Sep 6 01:21:44.939769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470972417.mount: Deactivated successfully.
Sep 6 01:21:45.752009 kubelet[2426]: I0906 01:21:45.751371 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2nhjl" podStartSLOduration=1.686376025 podStartE2EDuration="3.75134306s" podCreationTimestamp="2025-09-06 01:21:42 +0000 UTC" firstStartedPulling="2025-09-06 01:21:42.574567535 +0000 UTC m=+6.996751190" lastFinishedPulling="2025-09-06 01:21:44.63953453 +0000 UTC m=+9.061718225" observedRunningTime="2025-09-06 01:21:45.75126182 +0000 UTC m=+10.173445515" watchObservedRunningTime="2025-09-06 01:21:45.75134306 +0000 UTC m=+10.173526715"
Sep 6 01:21:48.798086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334233533.mount: Deactivated successfully.
Sep 6 01:21:51.236857 env[1472]: time="2025-09-06T01:21:51.236809247Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:51.252823 env[1472]: time="2025-09-06T01:21:51.252768030Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:51.258730 env[1472]: time="2025-09-06T01:21:51.258687144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 6 01:21:51.259363 env[1472]: time="2025-09-06T01:21:51.259331463Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 6 01:21:51.263321 env[1472]: time="2025-09-06T01:21:51.263246939Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 01:21:51.291159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175147876.mount: Deactivated successfully.
Sep 6 01:21:51.297403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591339912.mount: Deactivated successfully.
Sep 6 01:21:51.312053 env[1472]: time="2025-09-06T01:21:51.312005407Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\""
Sep 6 01:21:51.314098 env[1472]: time="2025-09-06T01:21:51.313569685Z" level=info msg="StartContainer for \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\""
Sep 6 01:21:51.330686 systemd[1]: Started cri-containerd-e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e.scope.
Sep 6 01:21:51.369981 env[1472]: time="2025-09-06T01:21:51.369933264Z" level=info msg="StartContainer for \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\" returns successfully"
Sep 6 01:21:51.375186 systemd[1]: cri-containerd-e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e.scope: Deactivated successfully.
Sep 6 01:21:52.288749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e-rootfs.mount: Deactivated successfully.
Sep 6 01:21:53.086970 env[1472]: time="2025-09-06T01:21:53.086909489Z" level=info msg="shim disconnected" id=e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e
Sep 6 01:21:53.086970 env[1472]: time="2025-09-06T01:21:53.086967249Z" level=warning msg="cleaning up after shim disconnected" id=e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e namespace=k8s.io
Sep 6 01:21:53.086970 env[1472]: time="2025-09-06T01:21:53.086977649Z" level=info msg="cleaning up dead shim"
Sep 6 01:21:53.093582 env[1472]: time="2025-09-06T01:21:53.093532523Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:21:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2878 runtime=io.containerd.runc.v2\n"
Sep 6 01:21:53.755837 env[1472]: time="2025-09-06T01:21:53.754724005Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 01:21:53.805388 env[1472]: time="2025-09-06T01:21:53.805338713Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\""
Sep 6 01:21:53.806125 env[1472]: time="2025-09-06T01:21:53.806014112Z" level=info msg="StartContainer for \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\""
Sep 6 01:21:53.822551 systemd[1]: Started cri-containerd-1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d.scope.
Sep 6 01:21:53.857684 env[1472]: time="2025-09-06T01:21:53.857619179Z" level=info msg="StartContainer for \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\" returns successfully"
Sep 6 01:21:53.865616 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 01:21:53.865804 systemd[1]: Stopped systemd-sysctl.service.
Sep 6 01:21:53.865964 systemd[1]: Stopping systemd-sysctl.service...
Sep 6 01:21:53.867644 systemd[1]: Starting systemd-sysctl.service...
Sep 6 01:21:53.870204 systemd[1]: cri-containerd-1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d.scope: Deactivated successfully.
Sep 6 01:21:53.879018 systemd[1]: Finished systemd-sysctl.service.
Sep 6 01:21:53.900751 env[1472]: time="2025-09-06T01:21:53.900706295Z" level=info msg="shim disconnected" id=1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d
Sep 6 01:21:53.901003 env[1472]: time="2025-09-06T01:21:53.900984455Z" level=warning msg="cleaning up after shim disconnected" id=1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d namespace=k8s.io
Sep 6 01:21:53.901071 env[1472]: time="2025-09-06T01:21:53.901057055Z" level=info msg="cleaning up dead shim"
Sep 6 01:21:53.908837 env[1472]: time="2025-09-06T01:21:53.908800527Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:21:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2944 runtime=io.containerd.runc.v2\n"
Sep 6 01:21:54.763307 env[1472]: time="2025-09-06T01:21:54.763255708Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 01:21:54.785071 systemd[1]: run-containerd-runc-k8s.io-1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d-runc.2asM82.mount: Deactivated successfully.
Sep 6 01:21:54.785165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d-rootfs.mount: Deactivated successfully.
Sep 6 01:21:54.797382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611110642.mount: Deactivated successfully.
Sep 6 01:21:54.809567 env[1472]: time="2025-09-06T01:21:54.809519302Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\""
Sep 6 01:21:54.810464 env[1472]: time="2025-09-06T01:21:54.810437901Z" level=info msg="StartContainer for \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\""
Sep 6 01:21:54.831538 systemd[1]: Started cri-containerd-9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634.scope.
Sep 6 01:21:54.862637 systemd[1]: cri-containerd-9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634.scope: Deactivated successfully.
Sep 6 01:21:54.868076 env[1472]: time="2025-09-06T01:21:54.868027643Z" level=info msg="StartContainer for \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\" returns successfully"
Sep 6 01:21:54.910006 env[1472]: time="2025-09-06T01:21:54.909959241Z" level=info msg="shim disconnected" id=9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634
Sep 6 01:21:54.910333 env[1472]: time="2025-09-06T01:21:54.910313481Z" level=warning msg="cleaning up after shim disconnected" id=9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634 namespace=k8s.io
Sep 6 01:21:54.910432 env[1472]: time="2025-09-06T01:21:54.910418161Z" level=info msg="cleaning up dead shim"
Sep 6 01:21:54.916622 env[1472]: time="2025-09-06T01:21:54.916588315Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:21:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3002 runtime=io.containerd.runc.v2\n"
Sep 6 01:21:55.766362 env[1472]: time="2025-09-06T01:21:55.764094643Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 01:21:55.785055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634-rootfs.mount: Deactivated successfully.
Sep 6 01:21:55.802825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936254572.mount: Deactivated successfully.
Sep 6 01:21:55.814649 env[1472]: time="2025-09-06T01:21:55.814606033Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\""
Sep 6 01:21:55.817012 env[1472]: time="2025-09-06T01:21:55.816970311Z" level=info msg="StartContainer for \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\""
Sep 6 01:21:55.831739 systemd[1]: Started cri-containerd-8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e.scope.
Sep 6 01:21:55.856777 systemd[1]: cri-containerd-8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e.scope: Deactivated successfully.
Sep 6 01:21:55.858497 env[1472]: time="2025-09-06T01:21:55.858361310Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod929a9d46_8e13_4d36_a4b6_93822e1ec811.slice/cri-containerd-8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e.scope/memory.events\": no such file or directory"
Sep 6 01:21:55.865216 env[1472]: time="2025-09-06T01:21:55.865177224Z" level=info msg="StartContainer for \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\" returns successfully"
Sep 6 01:21:55.896670 env[1472]: time="2025-09-06T01:21:55.896620713Z" level=info msg="shim disconnected" id=8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e
Sep 6 01:21:55.896670 env[1472]: time="2025-09-06T01:21:55.896665713Z" level=warning msg="cleaning up after shim disconnected" id=8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e namespace=k8s.io
Sep 6 01:21:55.896670 env[1472]: time="2025-09-06T01:21:55.896675713Z" level=info msg="cleaning up dead shim"
Sep 6 01:21:55.903753 env[1472]: time="2025-09-06T01:21:55.903701786Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:21:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3062 runtime=io.containerd.runc.v2\n"
Sep 6 01:21:56.769998 env[1472]: time="2025-09-06T01:21:56.768016996Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 01:21:56.798218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857591094.mount: Deactivated successfully.
Sep 6 01:21:56.813725 env[1472]: time="2025-09-06T01:21:56.813674112Z" level=info msg="CreateContainer within sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\""
Sep 6 01:21:56.815161 env[1472]: time="2025-09-06T01:21:56.815135151Z" level=info msg="StartContainer for \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\""
Sep 6 01:21:56.829880 systemd[1]: Started cri-containerd-c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4.scope.
Sep 6 01:21:56.861759 env[1472]: time="2025-09-06T01:21:56.861708946Z" level=info msg="StartContainer for \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\" returns successfully"
Sep 6 01:21:56.945039 kubelet[2426]: I0906 01:21:56.944318 2426 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 6 01:21:56.964298 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 6 01:21:56.998544 systemd[1]: Created slice kubepods-burstable-podbed495fe_239a_4cf8_977c_cc8dd3f72b3d.slice.
Sep 6 01:21:57.007844 systemd[1]: Created slice kubepods-burstable-poda34ec212_183f_4366_afa8_74e1610bf650.slice.
Sep 6 01:21:57.073882 kubelet[2426]: I0906 01:21:57.073847 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td722\" (UniqueName: \"kubernetes.io/projected/a34ec212-183f-4366-afa8-74e1610bf650-kube-api-access-td722\") pod \"coredns-668d6bf9bc-wth8q\" (UID: \"a34ec212-183f-4366-afa8-74e1610bf650\") " pod="kube-system/coredns-668d6bf9bc-wth8q"
Sep 6 01:21:57.074124 kubelet[2426]: I0906 01:21:57.074110 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tmvw\" (UniqueName: \"kubernetes.io/projected/bed495fe-239a-4cf8-977c-cc8dd3f72b3d-kube-api-access-4tmvw\") pod \"coredns-668d6bf9bc-cwqnv\" (UID: \"bed495fe-239a-4cf8-977c-cc8dd3f72b3d\") " pod="kube-system/coredns-668d6bf9bc-cwqnv"
Sep 6 01:21:57.074210 kubelet[2426]: I0906 01:21:57.074197 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a34ec212-183f-4366-afa8-74e1610bf650-config-volume\") pod \"coredns-668d6bf9bc-wth8q\" (UID: \"a34ec212-183f-4366-afa8-74e1610bf650\") " pod="kube-system/coredns-668d6bf9bc-wth8q"
Sep 6 01:21:57.074328 kubelet[2426]: I0906 01:21:57.074313 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bed495fe-239a-4cf8-977c-cc8dd3f72b3d-config-volume\") pod \"coredns-668d6bf9bc-cwqnv\" (UID: \"bed495fe-239a-4cf8-977c-cc8dd3f72b3d\") " pod="kube-system/coredns-668d6bf9bc-cwqnv"
Sep 6 01:21:57.303724 env[1472]: time="2025-09-06T01:21:57.303677129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cwqnv,Uid:bed495fe-239a-4cf8-977c-cc8dd3f72b3d,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:57.310731 env[1472]: time="2025-09-06T01:21:57.310683603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wth8q,Uid:a34ec212-183f-4366-afa8-74e1610bf650,Namespace:kube-system,Attempt:0,}"
Sep 6 01:21:57.389390 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Sep 6 01:21:59.044869 systemd-networkd[1623]: cilium_host: Link UP
Sep 6 01:21:59.051397 systemd-networkd[1623]: cilium_net: Link UP
Sep 6 01:21:59.053524 systemd-networkd[1623]: cilium_net: Gained carrier
Sep 6 01:21:59.059520 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 6 01:21:59.059601 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 6 01:21:59.063747 systemd-networkd[1623]: cilium_host: Gained carrier
Sep 6 01:21:59.180447 systemd-networkd[1623]: cilium_vxlan: Link UP
Sep 6 01:21:59.180453 systemd-networkd[1623]: cilium_vxlan: Gained carrier
Sep 6 01:21:59.414299 kernel: NET: Registered PF_ALG protocol family
Sep 6 01:21:59.689433 systemd-networkd[1623]: cilium_net: Gained IPv6LL
Sep 6 01:22:00.073397 systemd-networkd[1623]: cilium_host: Gained IPv6LL
Sep 6 01:22:00.157782 systemd-networkd[1623]: lxc_health: Link UP
Sep 6 01:22:00.190307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 01:22:00.190593 systemd-networkd[1623]: lxc_health: Gained carrier
Sep 6 01:22:00.379807 systemd-networkd[1623]: lxcbf673193b96c: Link UP
Sep 6 01:22:00.388332 kernel: eth0: renamed from tmp3c746
Sep 6 01:22:00.404346 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbf673193b96c: link becomes ready
Sep 6 01:22:00.401613 systemd-networkd[1623]: lxcbf673193b96c: Gained carrier
Sep 6 01:22:00.409048 systemd-networkd[1623]: lxc615695111cea: Link UP
Sep 6 01:22:00.417302 kernel: eth0: renamed from tmp784aa
Sep 6 01:22:00.428501 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc615695111cea: link becomes ready
Sep 6 01:22:00.428235 systemd-networkd[1623]: lxc615695111cea: Gained carrier
Sep 6 01:22:00.638535 kubelet[2426]: I0906 01:22:00.638389 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kc6dp" podStartSLOduration=11.151129232 podStartE2EDuration="19.638370144s" podCreationTimestamp="2025-09-06 01:21:41 +0000 UTC" firstStartedPulling="2025-09-06 01:21:42.77293067 +0000 UTC m=+7.195114325" lastFinishedPulling="2025-09-06 01:21:51.260171542 +0000 UTC m=+15.682355237" observedRunningTime="2025-09-06 01:21:57.79367403 +0000 UTC m=+22.215857725" watchObservedRunningTime="2025-09-06 01:22:00.638370144 +0000 UTC m=+25.060553799"
Sep 6 01:22:00.649496 systemd-networkd[1623]: cilium_vxlan: Gained IPv6LL
Sep 6 01:22:01.609605 systemd-networkd[1623]: lxcbf673193b96c: Gained IPv6LL
Sep 6 01:22:01.929517 systemd-networkd[1623]: lxc615695111cea: Gained IPv6LL
Sep 6 01:22:02.121442 systemd-networkd[1623]: lxc_health: Gained IPv6LL
Sep 6 01:22:04.069342 env[1472]: time="2025-09-06T01:22:04.067603890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:22:04.069342 env[1472]: time="2025-09-06T01:22:04.067650410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:22:04.069342 env[1472]: time="2025-09-06T01:22:04.067660890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:22:04.069342 env[1472]: time="2025-09-06T01:22:04.067759850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c7462dad15b1ead4c1733490f5d4563e4f7f3d63ffec03391cd2790348ea97b pid=3606 runtime=io.containerd.runc.v2
Sep 6 01:22:04.078329 env[1472]: time="2025-09-06T01:22:04.076290883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:22:04.078329 env[1472]: time="2025-09-06T01:22:04.076326283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:22:04.078329 env[1472]: time="2025-09-06T01:22:04.076336603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:22:04.078329 env[1472]: time="2025-09-06T01:22:04.076439563Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/784aa7de4af6fdfcf06d40c2893a2ac38ab6316fa8210a9126b4f32f4caa534f pid=3622 runtime=io.containerd.runc.v2
Sep 6 01:22:04.103061 systemd[1]: run-containerd-runc-k8s.io-3c7462dad15b1ead4c1733490f5d4563e4f7f3d63ffec03391cd2790348ea97b-runc.q7o4aR.mount: Deactivated successfully.
Sep 6 01:22:04.107978 systemd[1]: Started cri-containerd-3c7462dad15b1ead4c1733490f5d4563e4f7f3d63ffec03391cd2790348ea97b.scope.
Sep 6 01:22:04.112999 systemd[1]: Started cri-containerd-784aa7de4af6fdfcf06d40c2893a2ac38ab6316fa8210a9126b4f32f4caa534f.scope.
Sep 6 01:22:04.153492 env[1472]: time="2025-09-06T01:22:04.153446181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wth8q,Uid:a34ec212-183f-4366-afa8-74e1610bf650,Namespace:kube-system,Attempt:0,} returns sandbox id \"784aa7de4af6fdfcf06d40c2893a2ac38ab6316fa8210a9126b4f32f4caa534f\""
Sep 6 01:22:04.157603 env[1472]: time="2025-09-06T01:22:04.157563818Z" level=info msg="CreateContainer within sandbox \"784aa7de4af6fdfcf06d40c2893a2ac38ab6316fa8210a9126b4f32f4caa534f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 01:22:04.181213 env[1472]: time="2025-09-06T01:22:04.181174119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cwqnv,Uid:bed495fe-239a-4cf8-977c-cc8dd3f72b3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c7462dad15b1ead4c1733490f5d4563e4f7f3d63ffec03391cd2790348ea97b\""
Sep 6 01:22:04.184115 env[1472]: time="2025-09-06T01:22:04.184081916Z" level=info msg="CreateContainer within sandbox \"3c7462dad15b1ead4c1733490f5d4563e4f7f3d63ffec03391cd2790348ea97b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 01:22:04.226129 env[1472]: time="2025-09-06T01:22:04.226073962Z" level=info msg="CreateContainer within sandbox \"784aa7de4af6fdfcf06d40c2893a2ac38ab6316fa8210a9126b4f32f4caa534f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e72194b218c116144d1abf6db3ed6c95cdc27d8fd04a4bf60646424f9f87e08\""
Sep 6 01:22:04.226857 env[1472]: time="2025-09-06T01:22:04.226833522Z" level=info msg="StartContainer for \"4e72194b218c116144d1abf6db3ed6c95cdc27d8fd04a4bf60646424f9f87e08\""
Sep 6 01:22:04.237470 env[1472]: time="2025-09-06T01:22:04.237422273Z" level=info msg="CreateContainer within sandbox \"3c7462dad15b1ead4c1733490f5d4563e4f7f3d63ffec03391cd2790348ea97b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71c645e95c773bfabd37a72a87db19b6dc8282fb74f7982c91076fbae6bf40df\""
Sep 6 01:22:04.238356 env[1472]: time="2025-09-06T01:22:04.238318873Z" level=info msg="StartContainer for \"71c645e95c773bfabd37a72a87db19b6dc8282fb74f7982c91076fbae6bf40df\""
Sep 6 01:22:04.264433 systemd[1]: Started cri-containerd-4e72194b218c116144d1abf6db3ed6c95cdc27d8fd04a4bf60646424f9f87e08.scope.
Sep 6 01:22:04.267156 systemd[1]: Started cri-containerd-71c645e95c773bfabd37a72a87db19b6dc8282fb74f7982c91076fbae6bf40df.scope.
Sep 6 01:22:04.305230 env[1472]: time="2025-09-06T01:22:04.305179139Z" level=info msg="StartContainer for \"71c645e95c773bfabd37a72a87db19b6dc8282fb74f7982c91076fbae6bf40df\" returns successfully"
Sep 6 01:22:04.310426 env[1472]: time="2025-09-06T01:22:04.310381455Z" level=info msg="StartContainer for \"4e72194b218c116144d1abf6db3ed6c95cdc27d8fd04a4bf60646424f9f87e08\" returns successfully"
Sep 6 01:22:04.802696 kubelet[2426]: I0906 01:22:04.802616 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wth8q" podStartSLOduration=22.802597258 podStartE2EDuration="22.802597258s" podCreationTimestamp="2025-09-06 01:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:22:04.800966779 +0000 UTC m=+29.223150474" watchObservedRunningTime="2025-09-06 01:22:04.802597258 +0000 UTC m=+29.224780953"
Sep 6 01:22:04.869456 kubelet[2426]: I0906 01:22:04.869400 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cwqnv" podStartSLOduration=22.869382244 podStartE2EDuration="22.869382244s" podCreationTimestamp="2025-09-06 01:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:22:04.825774039 +0000 UTC m=+29.247957734" watchObservedRunningTime="2025-09-06 01:22:04.869382244 +0000 UTC m=+29.291565899"
Sep 6 01:22:52.458895 update_engine[1465]: I0906 01:22:52.458853 1465 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 6 01:22:52.458895 update_engine[1465]: I0906 01:22:52.458890 1465 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 6 01:22:52.459465 update_engine[1465]: I0906 01:22:52.459015 1465 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 6 01:22:52.459465 update_engine[1465]: I0906 01:22:52.459394 1465 omaha_request_params.cc:62] Current group set to lts
Sep 6 01:22:52.459533 update_engine[1465]: I0906 01:22:52.459492 1465 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 6 01:22:52.459533 update_engine[1465]: I0906 01:22:52.459498 1465 update_attempter.cc:643] Scheduling an action processor start.
Sep 6 01:22:52.459533 update_engine[1465]: I0906 01:22:52.459512 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 6 01:22:52.459643 update_engine[1465]: I0906 01:22:52.459538 1465 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 6 01:22:52.459964 locksmithd[1549]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 6 01:22:52.460144 update_engine[1465]: I0906 01:22:52.459988 1465 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 6 01:22:52.460144 update_engine[1465]: I0906 01:22:52.460000 1465 omaha_request_action.cc:271] Request:
Sep 6 01:22:52.460144 update_engine[1465]:
Sep 6 01:22:52.460144 update_engine[1465]:
Sep 6 01:22:52.460144 update_engine[1465]:
Sep 6 01:22:52.460144 update_engine[1465]:
Sep 6 01:22:52.460144 update_engine[1465]:
Sep 6 01:22:52.460144 update_engine[1465]:
Sep 6 01:22:52.460144 update_engine[1465]:
Sep 6 01:22:52.460144 update_engine[1465]:
Sep 6 01:22:52.460144 update_engine[1465]: I0906 01:22:52.460004 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 01:22:52.528983 update_engine[1465]: I0906 01:22:52.528946 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 01:22:52.529198 update_engine[1465]: I0906 01:22:52.529177 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 01:22:52.617039 update_engine[1465]: E0906 01:22:52.617002 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 01:22:52.617167 update_engine[1465]: I0906 01:22:52.617106 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 6 01:23:02.415662 update_engine[1465]: I0906 01:23:02.415616 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 01:23:02.415978 update_engine[1465]: I0906 01:23:02.415812 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 01:23:02.416010 update_engine[1465]: I0906 01:23:02.415979 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 01:23:02.426507 update_engine[1465]: E0906 01:23:02.426479 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 01:23:02.426611 update_engine[1465]: I0906 01:23:02.426576 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 6 01:23:12.420400 update_engine[1465]: I0906 01:23:12.420313 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 01:23:12.420739 update_engine[1465]: I0906 01:23:12.420513 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 01:23:12.420739 update_engine[1465]: I0906 01:23:12.420696 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 01:23:12.459540 update_engine[1465]: E0906 01:23:12.459503 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 01:23:12.459644 update_engine[1465]: I0906 01:23:12.459606 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 6 01:23:22.419816 update_engine[1465]: I0906 01:23:22.419771 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 01:23:22.420193 update_engine[1465]: I0906 01:23:22.419971 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 01:23:22.420193 update_engine[1465]: I0906 01:23:22.420149 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 01:23:22.431756 update_engine[1465]: E0906 01:23:22.431725 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 01:23:22.431849 update_engine[1465]: I0906 01:23:22.431818 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 6 01:23:22.431849 update_engine[1465]: I0906 01:23:22.431825 1465 omaha_request_action.cc:621] Omaha request response:
Sep 6 01:23:22.431913 update_engine[1465]: E0906 01:23:22.431895 1465 omaha_request_action.cc:640] Omaha request network transfer failed.
Sep 6 01:23:22.431940 update_engine[1465]: I0906 01:23:22.431912 1465 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 6 01:23:22.431940 update_engine[1465]: I0906 01:23:22.431915 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 6 01:23:22.431940 update_engine[1465]: I0906 01:23:22.431918 1465 update_attempter.cc:306] Processing Done.
Sep 6 01:23:22.431940 update_engine[1465]: E0906 01:23:22.431931 1465 update_attempter.cc:619] Update failed.
Sep 6 01:23:22.431940 update_engine[1465]: I0906 01:23:22.431933 1465 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 6 01:23:22.431940 update_engine[1465]: I0906 01:23:22.431936 1465 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 6 01:23:22.431940 update_engine[1465]: I0906 01:23:22.431939 1465 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 6 01:23:22.432094 update_engine[1465]: I0906 01:23:22.432000 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 6 01:23:22.432094 update_engine[1465]: I0906 01:23:22.432018 1465 omaha_request_action.cc:270] Posting an Omaha request to disabled
Sep 6 01:23:22.432094 update_engine[1465]: I0906 01:23:22.432021 1465 omaha_request_action.cc:271] Request:
Sep 6 01:23:22.432094 update_engine[1465]:
Sep 6 01:23:22.432094 update_engine[1465]:
Sep 6 01:23:22.432094 update_engine[1465]:
Sep 6 01:23:22.432094 update_engine[1465]:
Sep 6 01:23:22.432094 update_engine[1465]:
Sep 6 01:23:22.432094 update_engine[1465]:
Sep 6 01:23:22.432094 update_engine[1465]: I0906 01:23:22.432025 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 6 01:23:22.432550 update_engine[1465]: I0906 01:23:22.432137 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 6 01:23:22.432550 update_engine[1465]: I0906 01:23:22.432292 1465 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 6 01:23:22.432629 locksmithd[1549]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 6 01:23:22.500671 update_engine[1465]: E0906 01:23:22.500634 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 6 01:23:22.500853 update_engine[1465]: I0906 01:23:22.500733 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 6 01:23:22.500853 update_engine[1465]: I0906 01:23:22.500738 1465 omaha_request_action.cc:621] Omaha request response:
Sep 6 01:23:22.500853 update_engine[1465]: I0906 01:23:22.500743 1465 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 6 01:23:22.500853 update_engine[1465]: I0906 01:23:22.500746 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 6 01:23:22.500853 update_engine[1465]: I0906 01:23:22.500749 1465 update_attempter.cc:306] Processing Done.
Sep 6 01:23:22.500853 update_engine[1465]: I0906 01:23:22.500753 1465 update_attempter.cc:310] Error event sent.
Sep 6 01:23:22.500853 update_engine[1465]: I0906 01:23:22.500762 1465 update_check_scheduler.cc:74] Next update check in 40m34s
Sep 6 01:23:22.501128 locksmithd[1549]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 6 01:23:40.047013 systemd[1]: Started sshd@5-10.200.20.25:22-10.200.16.10:36358.service.
Sep 6 01:23:40.460629 sshd[3778]: Accepted publickey for core from 10.200.16.10 port 36358 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:23:40.462439 sshd[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:23:40.467079 systemd[1]: Started session-8.scope.
Sep 6 01:23:40.467446 systemd-logind[1463]: New session 8 of user core.
Sep 6 01:23:40.861493 sshd[3778]: pam_unix(sshd:session): session closed for user core
Sep 6 01:23:40.865655 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
Sep 6 01:23:40.865801 systemd[1]: session-8.scope: Deactivated successfully.
Sep 6 01:23:40.866831 systemd-logind[1463]: Removed session 8.
Sep 6 01:23:40.867081 systemd[1]: sshd@5-10.200.20.25:22-10.200.16.10:36358.service: Deactivated successfully.
Sep 6 01:23:45.941097 systemd[1]: Started sshd@6-10.200.20.25:22-10.200.16.10:36366.service.
Sep 6 01:23:46.399156 sshd[3792]: Accepted publickey for core from 10.200.16.10 port 36366 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:23:46.400973 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:23:46.405441 systemd[1]: Started session-9.scope.
Sep 6 01:23:46.405766 systemd-logind[1463]: New session 9 of user core.
Sep 6 01:23:46.808091 sshd[3792]: pam_unix(sshd:session): session closed for user core
Sep 6 01:23:46.811060 systemd[1]: sshd@6-10.200.20.25:22-10.200.16.10:36366.service: Deactivated successfully.
Sep 6 01:23:46.811440 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit.
Sep 6 01:23:46.811813 systemd[1]: session-9.scope: Deactivated successfully.
Sep 6 01:23:46.812719 systemd-logind[1463]: Removed session 9.
Sep 6 01:23:51.871456 systemd[1]: Started sshd@7-10.200.20.25:22-10.200.16.10:41180.service.
Sep 6 01:23:52.284688 sshd[3805]: Accepted publickey for core from 10.200.16.10 port 41180 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:23:52.286515 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:23:52.290881 systemd[1]: Started session-10.scope.
Sep 6 01:23:52.292173 systemd-logind[1463]: New session 10 of user core.
Sep 6 01:23:52.659139 sshd[3805]: pam_unix(sshd:session): session closed for user core
Sep 6 01:23:52.662130 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit.
Sep 6 01:23:52.662139 systemd[1]: session-10.scope: Deactivated successfully.
Sep 6 01:23:52.662812 systemd[1]: sshd@7-10.200.20.25:22-10.200.16.10:41180.service: Deactivated successfully.
Sep 6 01:23:52.663984 systemd-logind[1463]: Removed session 10.
Sep 6 01:23:57.727991 systemd[1]: Started sshd@8-10.200.20.25:22-10.200.16.10:41184.service.
Sep 6 01:23:58.140803 sshd[3817]: Accepted publickey for core from 10.200.16.10 port 41184 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:23:58.142627 sshd[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:23:58.147351 systemd[1]: Started session-11.scope.
Sep 6 01:23:58.147975 systemd-logind[1463]: New session 11 of user core.
Sep 6 01:23:58.530229 sshd[3817]: pam_unix(sshd:session): session closed for user core
Sep 6 01:23:58.533862 systemd[1]: sshd@8-10.200.20.25:22-10.200.16.10:41184.service: Deactivated successfully.
Sep 6 01:23:58.534632 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 01:23:58.535854 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit.
Sep 6 01:23:58.536574 systemd-logind[1463]: Removed session 11.
Sep 6 01:24:03.599986 systemd[1]: Started sshd@9-10.200.20.25:22-10.200.16.10:40530.service.
Sep 6 01:24:04.011178 sshd[3830]: Accepted publickey for core from 10.200.16.10 port 40530 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:04.012882 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:04.017217 systemd[1]: Started session-12.scope.
Sep 6 01:24:04.017546 systemd-logind[1463]: New session 12 of user core.
Sep 6 01:24:04.411992 sshd[3830]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:04.414778 systemd[1]: sshd@9-10.200.20.25:22-10.200.16.10:40530.service: Deactivated successfully.
Sep 6 01:24:04.415832 systemd[1]: session-12.scope: Deactivated successfully.
Sep 6 01:24:04.416589 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit.
Sep 6 01:24:04.417462 systemd-logind[1463]: Removed session 12.
Sep 6 01:24:04.481547 systemd[1]: Started sshd@10-10.200.20.25:22-10.200.16.10:40534.service.
Sep 6 01:24:04.902629 sshd[3843]: Accepted publickey for core from 10.200.16.10 port 40534 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:04.903750 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:04.907959 systemd[1]: Started session-13.scope.
Sep 6 01:24:04.909322 systemd-logind[1463]: New session 13 of user core.
Sep 6 01:24:05.313386 sshd[3843]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:05.316217 systemd[1]: sshd@10-10.200.20.25:22-10.200.16.10:40534.service: Deactivated successfully.
Sep 6 01:24:05.316949 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 01:24:05.317754 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit.
Sep 6 01:24:05.318662 systemd-logind[1463]: Removed session 13.
Sep 6 01:24:05.394820 systemd[1]: Started sshd@11-10.200.20.25:22-10.200.16.10:40544.service.
Sep 6 01:24:05.848485 sshd[3852]: Accepted publickey for core from 10.200.16.10 port 40544 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:05.850094 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:05.854474 systemd[1]: Started session-14.scope.
Sep 6 01:24:05.855069 systemd-logind[1463]: New session 14 of user core.
Sep 6 01:24:06.255490 sshd[3852]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:06.258624 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit.
Sep 6 01:24:06.259784 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 01:24:06.260589 systemd[1]: sshd@11-10.200.20.25:22-10.200.16.10:40544.service: Deactivated successfully.
Sep 6 01:24:06.261851 systemd-logind[1463]: Removed session 14.
Sep 6 01:24:11.317738 systemd[1]: Started sshd@12-10.200.20.25:22-10.200.16.10:54914.service.
Sep 6 01:24:11.732206 sshd[3864]: Accepted publickey for core from 10.200.16.10 port 54914 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:11.733814 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:11.737942 systemd[1]: Started session-15.scope.
Sep 6 01:24:11.738226 systemd-logind[1463]: New session 15 of user core.
Sep 6 01:24:12.104764 sshd[3864]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:12.107123 systemd[1]: sshd@12-10.200.20.25:22-10.200.16.10:54914.service: Deactivated successfully.
Sep 6 01:24:12.107853 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 01:24:12.108414 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit.
Sep 6 01:24:12.109239 systemd-logind[1463]: Removed session 15.
Sep 6 01:24:17.187506 systemd[1]: Started sshd@13-10.200.20.25:22-10.200.16.10:54928.service.
Sep 6 01:24:17.640311 sshd[3878]: Accepted publickey for core from 10.200.16.10 port 54928 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:17.641205 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:17.645909 systemd[1]: Started session-16.scope.
Sep 6 01:24:17.645910 systemd-logind[1463]: New session 16 of user core.
Sep 6 01:24:18.055827 sshd[3878]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:18.059117 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit.
Sep 6 01:24:18.059126 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 01:24:18.059733 systemd[1]: sshd@13-10.200.20.25:22-10.200.16.10:54928.service: Deactivated successfully.
Sep 6 01:24:18.060785 systemd-logind[1463]: Removed session 16.
Sep 6 01:24:18.124060 systemd[1]: Started sshd@14-10.200.20.25:22-10.200.16.10:54936.service.
Sep 6 01:24:18.574824 sshd[3889]: Accepted publickey for core from 10.200.16.10 port 54936 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:18.576632 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:18.580122 systemd-logind[1463]: New session 17 of user core.
Sep 6 01:24:18.582811 systemd[1]: Started session-17.scope.
Sep 6 01:24:19.034030 sshd[3889]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:19.037302 systemd[1]: sshd@14-10.200.20.25:22-10.200.16.10:54936.service: Deactivated successfully.
Sep 6 01:24:19.038076 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 01:24:19.038701 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit.
Sep 6 01:24:19.039725 systemd-logind[1463]: Removed session 17.
Sep 6 01:24:19.129365 systemd[1]: Started sshd@15-10.200.20.25:22-10.200.16.10:54950.service.
Sep 6 01:24:19.623778 sshd[3898]: Accepted publickey for core from 10.200.16.10 port 54950 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:19.625147 sshd[3898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:19.630412 systemd-logind[1463]: New session 18 of user core.
Sep 6 01:24:19.630878 systemd[1]: Started session-18.scope.
Sep 6 01:24:20.479209 sshd[3898]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:20.481816 systemd[1]: sshd@15-10.200.20.25:22-10.200.16.10:54950.service: Deactivated successfully.
Sep 6 01:24:20.482546 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 01:24:20.483144 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
Sep 6 01:24:20.483979 systemd-logind[1463]: Removed session 18.
Sep 6 01:24:20.552416 systemd[1]: Started sshd@16-10.200.20.25:22-10.200.16.10:41816.service.
Sep 6 01:24:21.004107 sshd[3915]: Accepted publickey for core from 10.200.16.10 port 41816 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:21.005725 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:21.009801 systemd-logind[1463]: New session 19 of user core.
Sep 6 01:24:21.010241 systemd[1]: Started session-19.scope.
Sep 6 01:24:21.536750 sshd[3915]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:21.539329 systemd[1]: sshd@16-10.200.20.25:22-10.200.16.10:41816.service: Deactivated successfully.
Sep 6 01:24:21.540057 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 01:24:21.540633 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
Sep 6 01:24:21.541671 systemd-logind[1463]: Removed session 19.
Sep 6 01:24:21.626452 systemd[1]: Started sshd@17-10.200.20.25:22-10.200.16.10:41830.service.
Sep 6 01:24:22.118673 sshd[3925]: Accepted publickey for core from 10.200.16.10 port 41830 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:22.120247 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:22.124513 systemd[1]: Started session-20.scope.
Sep 6 01:24:22.125083 systemd-logind[1463]: New session 20 of user core.
Sep 6 01:24:22.543119 sshd[3925]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:22.545681 systemd[1]: sshd@17-10.200.20.25:22-10.200.16.10:41830.service: Deactivated successfully.
Sep 6 01:24:22.546431 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 01:24:22.546953 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
Sep 6 01:24:22.547930 systemd-logind[1463]: Removed session 20.
Sep 6 01:24:27.612352 systemd[1]: Started sshd@18-10.200.20.25:22-10.200.16.10:41846.service.
Sep 6 01:24:28.066533 sshd[3938]: Accepted publickey for core from 10.200.16.10 port 41846 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:28.067600 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:28.071894 systemd[1]: Started session-21.scope.
Sep 6 01:24:28.072417 systemd-logind[1463]: New session 21 of user core.
Sep 6 01:24:28.464783 sshd[3938]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:28.467832 systemd[1]: sshd@18-10.200.20.25:22-10.200.16.10:41846.service: Deactivated successfully.
Sep 6 01:24:28.468035 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit.
Sep 6 01:24:28.468557 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 01:24:28.469416 systemd-logind[1463]: Removed session 21.
Sep 6 01:24:33.527444 systemd[1]: Started sshd@19-10.200.20.25:22-10.200.16.10:48260.service.
Sep 6 01:24:33.940635 sshd[3950]: Accepted publickey for core from 10.200.16.10 port 48260 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:33.942993 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:33.948362 systemd-logind[1463]: New session 22 of user core.
Sep 6 01:24:33.949486 systemd[1]: Started session-22.scope.
Sep 6 01:24:34.309303 sshd[3950]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:34.311794 systemd[1]: session-22.scope: Deactivated successfully.
Sep 6 01:24:34.312456 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit.
Sep 6 01:24:34.312609 systemd[1]: sshd@19-10.200.20.25:22-10.200.16.10:48260.service: Deactivated successfully.
Sep 6 01:24:34.313701 systemd-logind[1463]: Removed session 22.
Sep 6 01:24:39.379059 systemd[1]: Started sshd@20-10.200.20.25:22-10.200.16.10:48274.service.
Sep 6 01:24:39.793328 sshd[3964]: Accepted publickey for core from 10.200.16.10 port 48274 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:39.795079 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:39.799265 systemd[1]: Started session-23.scope.
Sep 6 01:24:39.800344 systemd-logind[1463]: New session 23 of user core.
Sep 6 01:24:40.151703 sshd[3964]: pam_unix(sshd:session): session closed for user core
Sep 6 01:24:40.154219 systemd[1]: sshd@20-10.200.20.25:22-10.200.16.10:48274.service: Deactivated successfully.
Sep 6 01:24:40.154976 systemd[1]: session-23.scope: Deactivated successfully.
Sep 6 01:24:40.155492 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit.
Sep 6 01:24:40.156159 systemd-logind[1463]: Removed session 23.
Sep 6 01:24:40.220824 systemd[1]: Started sshd@21-10.200.20.25:22-10.200.16.10:49116.service.
Sep 6 01:24:40.633906 sshd[3976]: Accepted publickey for core from 10.200.16.10 port 49116 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68
Sep 6 01:24:40.635258 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 6 01:24:40.639372 systemd-logind[1463]: New session 24 of user core.
Sep 6 01:24:40.639561 systemd[1]: Started session-24.scope.
Sep 6 01:24:42.625687 systemd[1]: run-containerd-runc-k8s.io-c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4-runc.AWCmuG.mount: Deactivated successfully.
Sep 6 01:24:42.637978 env[1472]: time="2025-09-06T01:24:42.637938122Z" level=info msg="StopContainer for \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\" with timeout 30 (s)"
Sep 6 01:24:42.638776 env[1472]: time="2025-09-06T01:24:42.638748524Z" level=info msg="Stop container \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\" with signal terminated"
Sep 6 01:24:42.652235 env[1472]: time="2025-09-06T01:24:42.652176445Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 01:24:42.653448 systemd[1]: cri-containerd-b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3.scope: Deactivated successfully.
Sep 6 01:24:42.663058 env[1472]: time="2025-09-06T01:24:42.663023118Z" level=info msg="StopContainer for \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\" with timeout 2 (s)"
Sep 6 01:24:42.663533 env[1472]: time="2025-09-06T01:24:42.663511239Z" level=info msg="Stop container \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\" with signal terminated"
Sep 6 01:24:42.671333 systemd-networkd[1623]: lxc_health: Link DOWN
Sep 6 01:24:42.671339 systemd-networkd[1623]: lxc_health: Lost carrier
Sep 6 01:24:42.678782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3-rootfs.mount: Deactivated successfully.
Sep 6 01:24:42.690564 systemd[1]: cri-containerd-c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4.scope: Deactivated successfully.
Sep 6 01:24:42.690866 systemd[1]: cri-containerd-c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4.scope: Consumed 6.308s CPU time.
Sep 6 01:24:42.710850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4-rootfs.mount: Deactivated successfully.
Sep 6 01:24:42.739801 env[1472]: time="2025-09-06T01:24:42.739726350Z" level=info msg="shim disconnected" id=c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4
Sep 6 01:24:42.739801 env[1472]: time="2025-09-06T01:24:42.739790311Z" level=warning msg="cleaning up after shim disconnected" id=c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4 namespace=k8s.io
Sep 6 01:24:42.739801 env[1472]: time="2025-09-06T01:24:42.739799951Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:42.740132 env[1472]: time="2025-09-06T01:24:42.739727670Z" level=info msg="shim disconnected" id=b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3
Sep 6 01:24:42.740232 env[1472]: time="2025-09-06T01:24:42.740215832Z" level=warning msg="cleaning up after shim disconnected" id=b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3 namespace=k8s.io
Sep 6 01:24:42.740322 env[1472]: time="2025-09-06T01:24:42.740307832Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:42.748327 env[1472]: time="2025-09-06T01:24:42.748236336Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4044 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:42.748528 env[1472]: time="2025-09-06T01:24:42.748236296Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4043 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:42.755320 env[1472]: time="2025-09-06T01:24:42.755243197Z" level=info msg="StopContainer for \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\" returns successfully"
Sep 6 01:24:42.756143 env[1472]: time="2025-09-06T01:24:42.756113160Z" level=info msg="StopPodSandbox for
\"3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46\""
Sep 6 01:24:42.756494 env[1472]: time="2025-09-06T01:24:42.756456801Z" level=info msg="Container to stop \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:24:42.757567 env[1472]: time="2025-09-06T01:24:42.757539444Z" level=info msg="StopContainer for \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\" returns successfully"
Sep 6 01:24:42.759172 env[1472]: time="2025-09-06T01:24:42.759143409Z" level=info msg="StopPodSandbox for \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\""
Sep 6 01:24:42.759464 env[1472]: time="2025-09-06T01:24:42.759429930Z" level=info msg="Container to stop \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:24:42.759574 env[1472]: time="2025-09-06T01:24:42.759554050Z" level=info msg="Container to stop \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:24:42.759660 env[1472]: time="2025-09-06T01:24:42.759643251Z" level=info msg="Container to stop \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:24:42.759730 env[1472]: time="2025-09-06T01:24:42.759714091Z" level=info msg="Container to stop \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:24:42.759798 env[1472]: time="2025-09-06T01:24:42.759778411Z" level=info msg="Container to stop \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 01:24:42.764997 systemd[1]:
cri-containerd-3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46.scope: Deactivated successfully.
Sep 6 01:24:42.765753 systemd[1]: cri-containerd-b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26.scope: Deactivated successfully.
Sep 6 01:24:42.803682 env[1472]: time="2025-09-06T01:24:42.803592344Z" level=info msg="shim disconnected" id=3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46
Sep 6 01:24:42.803682 env[1472]: time="2025-09-06T01:24:42.803652024Z" level=warning msg="cleaning up after shim disconnected" id=3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46 namespace=k8s.io
Sep 6 01:24:42.803682 env[1472]: time="2025-09-06T01:24:42.803661504Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:42.803889 env[1472]: time="2025-09-06T01:24:42.803854105Z" level=info msg="shim disconnected" id=b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26
Sep 6 01:24:42.803889 env[1472]: time="2025-09-06T01:24:42.803879265Z" level=warning msg="cleaning up after shim disconnected" id=b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26 namespace=k8s.io
Sep 6 01:24:42.803889 env[1472]: time="2025-09-06T01:24:42.803886425Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:42.812132 env[1472]: time="2025-09-06T01:24:42.812092010Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4108 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:42.812580 env[1472]: time="2025-09-06T01:24:42.812551291Z" level=info msg="TearDown network for sandbox \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" successfully"
Sep 6 01:24:42.812685 env[1472]: time="2025-09-06T01:24:42.812668851Z" level=info msg="StopPodSandbox for \"b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26\" returns successfully"
Sep 6 01:24:42.812825 env[1472]: time="2025-09-06T01:24:42.812262570Z" level=warning msg="cleanup warnings
time=\"2025-09-06T01:24:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4107 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:42.814249 env[1472]: time="2025-09-06T01:24:42.814218776Z" level=info msg="TearDown network for sandbox \"3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46\" successfully"
Sep 6 01:24:42.814380 env[1472]: time="2025-09-06T01:24:42.814360777Z" level=info msg="StopPodSandbox for \"3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46\" returns successfully"
Sep 6 01:24:42.915573 kubelet[2426]: I0906 01:24:42.915369 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-hostproc\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.915573 kubelet[2426]: I0906 01:24:42.915490 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-run\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.915573 kubelet[2426]: I0906 01:24:42.915511 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-host-proc-sys-kernel\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.915573 kubelet[2426]: I0906 01:24:42.915451 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-hostproc" (OuterVolumeSpecName: "hostproc") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.916501 kubelet[2426]: I0906 01:24:42.916476 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.916571 kubelet[2426]: I0906 01:24:42.916516 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-hubble-tls\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.916571 kubelet[2426]: I0906 01:24:42.916536 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qz89\" (UniqueName: \"kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-kube-api-access-4qz89\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.916845 kubelet[2426]: I0906 01:24:42.916827 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cni-path\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.916934 kubelet[2426]: I0906 01:24:42.916856 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-lib-modules\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.916934 kubelet[2426]: I0906 01:24:42.916875 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/929a9d46-8e13-4d36-a4b6-93822e1ec811-clustermesh-secrets\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.916934 kubelet[2426]: I0906 01:24:42.916889 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-etc-cni-netd\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.916934 kubelet[2426]: I0906 01:24:42.916908 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-xtables-lock\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.916934 kubelet[2426]: I0906 01:24:42.916924 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsz88\" (UniqueName: \"kubernetes.io/projected/ae440b46-1442-4068-ad0a-06eb6db20fff-kube-api-access-qsz88\") pod \"ae440b46-1442-4068-ad0a-06eb6db20fff\" (UID: \"ae440b46-1442-4068-ad0a-06eb6db20fff\") "
Sep 6 01:24:42.917052 kubelet[2426]: I0906 01:24:42.916942 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-config-path\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.917052 kubelet[2426]: I0906 01:24:42.916957 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-host-proc-sys-net\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID:
\"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.917052 kubelet[2426]: I0906 01:24:42.916970 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-bpf-maps\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.917052 kubelet[2426]: I0906 01:24:42.916985 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae440b46-1442-4068-ad0a-06eb6db20fff-cilium-config-path\") pod \"ae440b46-1442-4068-ad0a-06eb6db20fff\" (UID: \"ae440b46-1442-4068-ad0a-06eb6db20fff\") "
Sep 6 01:24:42.917052 kubelet[2426]: I0906 01:24:42.917001 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-cgroup\") pod \"929a9d46-8e13-4d36-a4b6-93822e1ec811\" (UID: \"929a9d46-8e13-4d36-a4b6-93822e1ec811\") "
Sep 6 01:24:42.917052 kubelet[2426]: I0906 01:24:42.917041 2426 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-hostproc\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:42.917180 kubelet[2426]: I0906 01:24:42.917052 2426 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-run\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:42.917180 kubelet[2426]: I0906 01:24:42.917081 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811").
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.919787 kubelet[2426]: I0906 01:24:42.919750 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 01:24:42.919885 kubelet[2426]: I0906 01:24:42.919801 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.919885 kubelet[2426]: I0906 01:24:42.919821 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.919885 kubelet[2426]: I0906 01:24:42.919835 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cni-path" (OuterVolumeSpecName: "cni-path") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "cni-path".
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.919885 kubelet[2426]: I0906 01:24:42.919850 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.920178 kubelet[2426]: I0906 01:24:42.920140 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-kube-api-access-4qz89" (OuterVolumeSpecName: "kube-api-access-4qz89") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "kube-api-access-4qz89". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 01:24:42.922637 kubelet[2426]: I0906 01:24:42.922603 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929a9d46-8e13-4d36-a4b6-93822e1ec811-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 6 01:24:42.922724 kubelet[2426]: I0906 01:24:42.922654 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.922724 kubelet[2426]: I0906 01:24:42.922674 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.924410 kubelet[2426]: I0906 01:24:42.924386 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae440b46-1442-4068-ad0a-06eb6db20fff-kube-api-access-qsz88" (OuterVolumeSpecName: "kube-api-access-qsz88") pod "ae440b46-1442-4068-ad0a-06eb6db20fff" (UID: "ae440b46-1442-4068-ad0a-06eb6db20fff"). InnerVolumeSpecName "kube-api-access-qsz88". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 01:24:42.924463 kubelet[2426]: I0906 01:24:42.924423 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 6 01:24:42.926121 kubelet[2426]: I0906 01:24:42.926098 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae440b46-1442-4068-ad0a-06eb6db20fff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae440b46-1442-4068-ad0a-06eb6db20fff" (UID: "ae440b46-1442-4068-ad0a-06eb6db20fff"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 01:24:42.926446 kubelet[2426]: I0906 01:24:42.926426 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "929a9d46-8e13-4d36-a4b6-93822e1ec811" (UID: "929a9d46-8e13-4d36-a4b6-93822e1ec811"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 01:24:43.017731 kubelet[2426]: I0906 01:24:43.017689 2426 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-etc-cni-netd\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017731 kubelet[2426]: I0906 01:24:43.017728 2426 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-xtables-lock\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017731 kubelet[2426]: I0906 01:24:43.017739 2426 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qsz88\" (UniqueName: \"kubernetes.io/projected/ae440b46-1442-4068-ad0a-06eb6db20fff-kube-api-access-qsz88\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017922 kubelet[2426]: I0906 01:24:43.017750 2426 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-host-proc-sys-net\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017922 kubelet[2426]: I0906 01:24:43.017771 2426 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-config-path\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017922 kubelet[2426]: I0906
01:24:43.017781 2426 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cilium-cgroup\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017922 kubelet[2426]: I0906 01:24:43.017789 2426 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-bpf-maps\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017922 kubelet[2426]: I0906 01:24:43.017797 2426 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae440b46-1442-4068-ad0a-06eb6db20fff-cilium-config-path\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017922 kubelet[2426]: I0906 01:24:43.017806 2426 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-cni-path\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017922 kubelet[2426]: I0906 01:24:43.017815 2426 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.017922 kubelet[2426]: I0906 01:24:43.017827 2426 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-hubble-tls\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.018098 kubelet[2426]: I0906 01:24:43.017835 2426 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4qz89\" (UniqueName: \"kubernetes.io/projected/929a9d46-8e13-4d36-a4b6-93822e1ec811-kube-api-access-4qz89\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.018098 kubelet[2426]: I0906
01:24:43.017844 2426 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/929a9d46-8e13-4d36-a4b6-93822e1ec811-lib-modules\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.018098 kubelet[2426]: I0906 01:24:43.017852 2426 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/929a9d46-8e13-4d36-a4b6-93822e1ec811-clustermesh-secrets\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\""
Sep 6 01:24:43.061641 kubelet[2426]: I0906 01:24:43.061614 2426 scope.go:117] "RemoveContainer" containerID="b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3"
Sep 6 01:24:43.064246 env[1472]: time="2025-09-06T01:24:43.063938171Z" level=info msg="RemoveContainer for \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\""
Sep 6 01:24:43.068609 systemd[1]: Removed slice kubepods-besteffort-podae440b46_1442_4068_ad0a_06eb6db20fff.slice.
Sep 6 01:24:43.074004 systemd[1]: Removed slice kubepods-burstable-pod929a9d46_8e13_4d36_a4b6_93822e1ec811.slice.
Sep 6 01:24:43.074081 systemd[1]: kubepods-burstable-pod929a9d46_8e13_4d36_a4b6_93822e1ec811.slice: Consumed 6.400s CPU time.
Sep 6 01:24:43.075298 env[1472]: time="2025-09-06T01:24:43.075155004Z" level=info msg="RemoveContainer for \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\" returns successfully"
Sep 6 01:24:43.075758 kubelet[2426]: I0906 01:24:43.075570 2426 scope.go:117] "RemoveContainer" containerID="b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3"
Sep 6 01:24:43.076113 env[1472]: time="2025-09-06T01:24:43.075998647Z" level=error msg="ContainerStatus for \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\": not found"
Sep 6 01:24:43.077593 kubelet[2426]: E0906 01:24:43.077569 2426 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\": not found" containerID="b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3"
Sep 6 01:24:43.077809 kubelet[2426]: I0906 01:24:43.077695 2426 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3"} err="failed to get container status \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3e6a03c45df6f176685824ad558146491df125cd58c9abe1cacc94f90a53ba3\": not found"
Sep 6 01:24:43.077886 kubelet[2426]: I0906 01:24:43.077873 2426 scope.go:117] "RemoveContainer" containerID="c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4"
Sep 6 01:24:43.080015 env[1472]: time="2025-09-06T01:24:43.079738018Z" level=info msg="RemoveContainer for \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\""
Sep 6 01:24:43.087253 env[1472]:
time="2025-09-06T01:24:43.087176240Z" level=info msg="RemoveContainer for \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\" returns successfully" Sep 6 01:24:43.087471 kubelet[2426]: I0906 01:24:43.087454 2426 scope.go:117] "RemoveContainer" containerID="8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e" Sep 6 01:24:43.088398 env[1472]: time="2025-09-06T01:24:43.088365124Z" level=info msg="RemoveContainer for \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\"" Sep 6 01:24:43.100548 env[1472]: time="2025-09-06T01:24:43.100328200Z" level=info msg="RemoveContainer for \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\" returns successfully" Sep 6 01:24:43.100884 kubelet[2426]: I0906 01:24:43.100849 2426 scope.go:117] "RemoveContainer" containerID="9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634" Sep 6 01:24:43.102243 env[1472]: time="2025-09-06T01:24:43.101980965Z" level=info msg="RemoveContainer for \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\"" Sep 6 01:24:43.110491 env[1472]: time="2025-09-06T01:24:43.110452350Z" level=info msg="RemoveContainer for \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\" returns successfully" Sep 6 01:24:43.110768 kubelet[2426]: I0906 01:24:43.110740 2426 scope.go:117] "RemoveContainer" containerID="1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d" Sep 6 01:24:43.111843 env[1472]: time="2025-09-06T01:24:43.111814954Z" level=info msg="RemoveContainer for \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\"" Sep 6 01:24:43.120138 env[1472]: time="2025-09-06T01:24:43.120100859Z" level=info msg="RemoveContainer for \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\" returns successfully" Sep 6 01:24:43.120444 kubelet[2426]: I0906 01:24:43.120419 2426 scope.go:117] "RemoveContainer" containerID="e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e" Sep 6 
01:24:43.123511 env[1472]: time="2025-09-06T01:24:43.123475429Z" level=info msg="RemoveContainer for \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\"" Sep 6 01:24:43.131687 env[1472]: time="2025-09-06T01:24:43.131653333Z" level=info msg="RemoveContainer for \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\" returns successfully" Sep 6 01:24:43.131996 kubelet[2426]: I0906 01:24:43.131973 2426 scope.go:117] "RemoveContainer" containerID="c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4" Sep 6 01:24:43.132213 env[1472]: time="2025-09-06T01:24:43.132156735Z" level=error msg="ContainerStatus for \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\": not found" Sep 6 01:24:43.132377 kubelet[2426]: E0906 01:24:43.132357 2426 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\": not found" containerID="c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4" Sep 6 01:24:43.132475 kubelet[2426]: I0906 01:24:43.132452 2426 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4"} err="failed to get container status \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0f6ded5ade6897ce7aaf5accfd8481f240f7686c3d2b1ffc480b3dca2b409c4\": not found" Sep 6 01:24:43.132552 kubelet[2426]: I0906 01:24:43.132539 2426 scope.go:117] "RemoveContainer" containerID="8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e" Sep 6 01:24:43.132788 env[1472]: 
time="2025-09-06T01:24:43.132741777Z" level=error msg="ContainerStatus for \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\": not found" Sep 6 01:24:43.132928 kubelet[2426]: E0906 01:24:43.132900 2426 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\": not found" containerID="8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e" Sep 6 01:24:43.132968 kubelet[2426]: I0906 01:24:43.132937 2426 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e"} err="failed to get container status \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ea2da3e097655458e89ac759c2a85e862923e39088a3f041f7d89d9407f588e\": not found" Sep 6 01:24:43.132968 kubelet[2426]: I0906 01:24:43.132955 2426 scope.go:117] "RemoveContainer" containerID="9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634" Sep 6 01:24:43.133141 env[1472]: time="2025-09-06T01:24:43.133092298Z" level=error msg="ContainerStatus for \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\": not found" Sep 6 01:24:43.133267 kubelet[2426]: E0906 01:24:43.133250 2426 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\": not found" 
containerID="9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634" Sep 6 01:24:43.133398 kubelet[2426]: I0906 01:24:43.133379 2426 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634"} err="failed to get container status \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f38d2dfd4e15fca3c7a20d62611aa38d94a3b1c0032e965170001d63c164634\": not found" Sep 6 01:24:43.133461 kubelet[2426]: I0906 01:24:43.133450 2426 scope.go:117] "RemoveContainer" containerID="1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d" Sep 6 01:24:43.133707 env[1472]: time="2025-09-06T01:24:43.133658579Z" level=error msg="ContainerStatus for \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\": not found" Sep 6 01:24:43.133843 kubelet[2426]: E0906 01:24:43.133820 2426 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\": not found" containerID="1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d" Sep 6 01:24:43.133889 kubelet[2426]: I0906 01:24:43.133855 2426 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d"} err="failed to get container status \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e1fbb73bf38b36365d8991ce793a5c6d240572e7df42ff84b9cfffdc8738a9d\": not found" Sep 6 01:24:43.133889 
kubelet[2426]: I0906 01:24:43.133871 2426 scope.go:117] "RemoveContainer" containerID="e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e" Sep 6 01:24:43.134050 env[1472]: time="2025-09-06T01:24:43.134000220Z" level=error msg="ContainerStatus for \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\": not found" Sep 6 01:24:43.134172 kubelet[2426]: E0906 01:24:43.134156 2426 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\": not found" containerID="e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e" Sep 6 01:24:43.134257 kubelet[2426]: I0906 01:24:43.134239 2426 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e"} err="failed to get container status \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5bdc86e49fdb5347ec6e02454548814d2b906363c39d4275d789b9b7bdc885e\": not found" Sep 6 01:24:43.616060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26-rootfs.mount: Deactivated successfully. Sep 6 01:24:43.616144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3566457a523075198c069b01f975de36ff6ba26b3fb20c68931829f53e21e26-shm.mount: Deactivated successfully. Sep 6 01:24:43.616210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46-rootfs.mount: Deactivated successfully. 
Sep 6 01:24:43.616255 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b3918c6979a2938a5f40708b75b00e43cbd39003da3e242899af87dc05cfa46-shm.mount: Deactivated successfully. Sep 6 01:24:43.616337 systemd[1]: var-lib-kubelet-pods-929a9d46\x2d8e13\x2d4d36\x2da4b6\x2d93822e1ec811-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4qz89.mount: Deactivated successfully. Sep 6 01:24:43.616389 systemd[1]: var-lib-kubelet-pods-ae440b46\x2d1442\x2d4068\x2dad0a\x2d06eb6db20fff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqsz88.mount: Deactivated successfully. Sep 6 01:24:43.616438 systemd[1]: var-lib-kubelet-pods-929a9d46\x2d8e13\x2d4d36\x2da4b6\x2d93822e1ec811-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 01:24:43.616485 systemd[1]: var-lib-kubelet-pods-929a9d46\x2d8e13\x2d4d36\x2da4b6\x2d93822e1ec811-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:24:43.660214 kubelet[2426]: I0906 01:24:43.660173 2426 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="929a9d46-8e13-4d36-a4b6-93822e1ec811" path="/var/lib/kubelet/pods/929a9d46-8e13-4d36-a4b6-93822e1ec811/volumes" Sep 6 01:24:43.660786 kubelet[2426]: I0906 01:24:43.660761 2426 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae440b46-1442-4068-ad0a-06eb6db20fff" path="/var/lib/kubelet/pods/ae440b46-1442-4068-ad0a-06eb6db20fff/volumes" Sep 6 01:24:44.638351 sshd[3976]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:44.641034 systemd[1]: sshd@21-10.200.20.25:22-10.200.16.10:49116.service: Deactivated successfully. Sep 6 01:24:44.641755 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 01:24:44.641905 systemd[1]: session-24.scope: Consumed 1.097s CPU time. Sep 6 01:24:44.642318 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Sep 6 01:24:44.643354 systemd-logind[1463]: Removed session 24. 
Sep 6 01:24:44.707314 systemd[1]: Started sshd@22-10.200.20.25:22-10.200.16.10:49124.service. Sep 6 01:24:45.121887 sshd[4142]: Accepted publickey for core from 10.200.16.10 port 49124 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:45.123140 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:45.127630 systemd[1]: Started session-25.scope. Sep 6 01:24:45.127916 systemd-logind[1463]: New session 25 of user core. Sep 6 01:24:45.779872 kubelet[2426]: E0906 01:24:45.779824 2426 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:24:46.319366 kubelet[2426]: I0906 01:24:46.319324 2426 memory_manager.go:355] "RemoveStaleState removing state" podUID="929a9d46-8e13-4d36-a4b6-93822e1ec811" containerName="cilium-agent" Sep 6 01:24:46.319366 kubelet[2426]: I0906 01:24:46.319352 2426 memory_manager.go:355] "RemoveStaleState removing state" podUID="ae440b46-1442-4068-ad0a-06eb6db20fff" containerName="cilium-operator" Sep 6 01:24:46.324907 systemd[1]: Created slice kubepods-burstable-pod83f72a04_3ab8_4527_b04c_9761fa268e81.slice. Sep 6 01:24:46.339845 sshd[4142]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:46.342957 systemd[1]: sshd@22-10.200.20.25:22-10.200.16.10:49124.service: Deactivated successfully. Sep 6 01:24:46.343701 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 01:24:46.344023 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Sep 6 01:24:46.346606 systemd-logind[1463]: Removed session 25. Sep 6 01:24:46.421823 systemd[1]: Started sshd@23-10.200.20.25:22-10.200.16.10:49136.service. 
Sep 6 01:24:46.435340 kubelet[2426]: I0906 01:24:46.435309 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-bpf-maps\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.435534 kubelet[2426]: I0906 01:24:46.435519 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-xtables-lock\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.435651 kubelet[2426]: I0906 01:24:46.435638 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-hostproc\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.435753 kubelet[2426]: I0906 01:24:46.435741 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83f72a04-3ab8-4527-b04c-9761fa268e81-clustermesh-secrets\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.435854 kubelet[2426]: I0906 01:24:46.435842 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-host-proc-sys-net\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.435970 kubelet[2426]: I0906 01:24:46.435955 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-run\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436081 kubelet[2426]: I0906 01:24:46.436065 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-ipsec-secrets\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436202 kubelet[2426]: I0906 01:24:46.436189 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-cgroup\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436334 kubelet[2426]: I0906 01:24:46.436315 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-lib-modules\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436443 kubelet[2426]: I0906 01:24:46.436431 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83f72a04-3ab8-4527-b04c-9761fa268e81-hubble-tls\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436549 kubelet[2426]: I0906 01:24:46.436537 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-etc-cni-netd\") pod 
\"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436663 kubelet[2426]: I0906 01:24:46.436643 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-host-proc-sys-kernel\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436764 kubelet[2426]: I0906 01:24:46.436752 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cni-path\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436857 kubelet[2426]: I0906 01:24:46.436843 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-config-path\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.436955 kubelet[2426]: I0906 01:24:46.436943 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzk6\" (UniqueName: \"kubernetes.io/projected/83f72a04-3ab8-4527-b04c-9761fa268e81-kube-api-access-vvzk6\") pod \"cilium-d8bpr\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " pod="kube-system/cilium-d8bpr" Sep 6 01:24:46.627796 env[1472]: time="2025-09-06T01:24:46.627257265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8bpr,Uid:83f72a04-3ab8-4527-b04c-9761fa268e81,Namespace:kube-system,Attempt:0,}" Sep 6 01:24:46.660222 env[1472]: time="2025-09-06T01:24:46.660135320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:24:46.660407 env[1472]: time="2025-09-06T01:24:46.660224281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:24:46.660407 env[1472]: time="2025-09-06T01:24:46.660250401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:24:46.660560 env[1472]: time="2025-09-06T01:24:46.660515121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374 pid=4166 runtime=io.containerd.runc.v2 Sep 6 01:24:46.670890 systemd[1]: Started cri-containerd-4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374.scope. Sep 6 01:24:46.696538 env[1472]: time="2025-09-06T01:24:46.696496865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8bpr,Uid:83f72a04-3ab8-4527-b04c-9761fa268e81,Namespace:kube-system,Attempt:0,} returns sandbox id \"4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374\"" Sep 6 01:24:46.700996 env[1472]: time="2025-09-06T01:24:46.700951278Z" level=info msg="CreateContainer within sandbox \"4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:24:46.733579 env[1472]: time="2025-09-06T01:24:46.733526412Z" level=info msg="CreateContainer within sandbox \"4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f\"" Sep 6 01:24:46.734760 env[1472]: time="2025-09-06T01:24:46.734727056Z" level=info msg="StartContainer for \"023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f\"" Sep 6 01:24:46.750864 systemd[1]: Started 
cri-containerd-023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f.scope. Sep 6 01:24:46.764673 systemd[1]: cri-containerd-023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f.scope: Deactivated successfully. Sep 6 01:24:46.835247 env[1472]: time="2025-09-06T01:24:46.835184066Z" level=info msg="shim disconnected" id=023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f Sep 6 01:24:46.835247 env[1472]: time="2025-09-06T01:24:46.835239106Z" level=warning msg="cleaning up after shim disconnected" id=023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f namespace=k8s.io Sep 6 01:24:46.835247 env[1472]: time="2025-09-06T01:24:46.835248946Z" level=info msg="cleaning up dead shim" Sep 6 01:24:46.842130 env[1472]: time="2025-09-06T01:24:46.842070406Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4225 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T01:24:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 01:24:46.842590 env[1472]: time="2025-09-06T01:24:46.842464607Z" level=error msg="copy shim log" error="read /proc/self/fd/33: file already closed" Sep 6 01:24:46.842848 env[1472]: time="2025-09-06T01:24:46.842805888Z" level=error msg="Failed to pipe stderr of container \"023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f\"" error="reading from a closed fifo" Sep 6 01:24:46.842932 env[1472]: time="2025-09-06T01:24:46.842883888Z" level=error msg="Failed to pipe stdout of container \"023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f\"" error="reading from a closed fifo" Sep 6 01:24:46.848784 env[1472]: time="2025-09-06T01:24:46.848710785Z" level=error msg="StartContainer for 
\"023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 01:24:46.849190 kubelet[2426]: E0906 01:24:46.849001 2426 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f" Sep 6 01:24:46.849190 kubelet[2426]: E0906 01:24:46.849147 2426 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 01:24:46.849190 kubelet[2426]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 01:24:46.849190 kubelet[2426]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 01:24:46.849190 kubelet[2426]: rm /hostbin/cilium-mount Sep 6 01:24:46.851062 kubelet[2426]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-d8bpr_kube-system(83f72a04-3ab8-4527-b04c-9761fa268e81): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 01:24:46.851062 kubelet[2426]: > logger="UnhandledError" Sep 6 01:24:46.851165 kubelet[2426]: E0906 01:24:46.850237 2426 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-d8bpr" podUID="83f72a04-3ab8-4527-b04c-9761fa268e81" Sep 6 01:24:46.876771 sshd[4152]: Accepted publickey for core from 10.200.16.10 port 49136 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:46.877420 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:46.881740 systemd[1]: Started session-26.scope. Sep 6 01:24:46.882038 systemd-logind[1463]: New session 26 of user core. Sep 6 01:24:47.087195 env[1472]: time="2025-09-06T01:24:47.082865738Z" level=info msg="CreateContainer within sandbox \"4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Sep 6 01:24:47.119646 env[1472]: time="2025-09-06T01:24:47.119597723Z" level=info msg="CreateContainer within sandbox \"4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e\"" Sep 6 01:24:47.120667 env[1472]: time="2025-09-06T01:24:47.120636046Z" level=info msg="StartContainer for \"dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e\"" Sep 6 01:24:47.141555 systemd[1]: Started cri-containerd-dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e.scope. Sep 6 01:24:47.152318 systemd[1]: cri-containerd-dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e.scope: Deactivated successfully. Sep 6 01:24:47.152573 systemd[1]: Stopped cri-containerd-dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e.scope. 
Sep 6 01:24:47.174527 env[1472]: time="2025-09-06T01:24:47.174471200Z" level=info msg="shim disconnected" id=dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e Sep 6 01:24:47.174527 env[1472]: time="2025-09-06T01:24:47.174525480Z" level=warning msg="cleaning up after shim disconnected" id=dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e namespace=k8s.io Sep 6 01:24:47.174527 env[1472]: time="2025-09-06T01:24:47.174535280Z" level=info msg="cleaning up dead shim" Sep 6 01:24:47.181054 env[1472]: time="2025-09-06T01:24:47.181010418Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4267 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T01:24:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 01:24:47.181455 env[1472]: time="2025-09-06T01:24:47.181403739Z" level=error msg="copy shim log" error="read /proc/self/fd/33: file already closed" Sep 6 01:24:47.181954 env[1472]: time="2025-09-06T01:24:47.181694220Z" level=error msg="Failed to pipe stdout of container \"dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e\"" error="reading from a closed fifo" Sep 6 01:24:47.182077 env[1472]: time="2025-09-06T01:24:47.181741060Z" level=error msg="Failed to pipe stderr of container \"dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e\"" error="reading from a closed fifo" Sep 6 01:24:47.186218 env[1472]: time="2025-09-06T01:24:47.186176993Z" level=error msg="StartContainer for \"dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Sep 6 01:24:47.187016 kubelet[2426]: E0906 01:24:47.186559 2426 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e" Sep 6 01:24:47.187016 kubelet[2426]: E0906 01:24:47.186698 2426 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 01:24:47.187016 kubelet[2426]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 01:24:47.187016 kubelet[2426]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 01:24:47.187016 kubelet[2426]: rm /hostbin/cilium-mount Sep 6 01:24:47.187215 kubelet[2426]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-d8bpr_kube-system(83f72a04-3ab8-4527-b04c-9761fa268e81): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 01:24:47.187215 kubelet[2426]: > logger="UnhandledError" Sep 6 01:24:47.188157 kubelet[2426]: E0906 01:24:47.188105 2426 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-d8bpr" podUID="83f72a04-3ab8-4527-b04c-9761fa268e81" Sep 6 01:24:47.318518 sshd[4152]: pam_unix(sshd:session): session closed for user core Sep 6 01:24:47.321124 systemd[1]: sshd@23-10.200.20.25:22-10.200.16.10:49136.service: Deactivated successfully. Sep 6 01:24:47.321847 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 01:24:47.322380 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit. Sep 6 01:24:47.323232 systemd-logind[1463]: Removed session 26. Sep 6 01:24:47.400293 systemd[1]: Started sshd@24-10.200.20.25:22-10.200.16.10:49152.service. Sep 6 01:24:47.892796 sshd[4283]: Accepted publickey for core from 10.200.16.10 port 49152 ssh2: RSA SHA256:61uHVL+Uw1UCTgOuoaZ58b8YSngF6bjnT9fiLryAt68 Sep 6 01:24:47.894221 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:24:47.899232 systemd[1]: Started session-27.scope. Sep 6 01:24:47.899974 systemd-logind[1463]: New session 27 of user core. 
Sep 6 01:24:48.080605 kubelet[2426]: I0906 01:24:48.080546 2426 scope.go:117] "RemoveContainer" containerID="023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f" Sep 6 01:24:48.081092 env[1472]: time="2025-09-06T01:24:48.081049944Z" level=info msg="StopPodSandbox for \"4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374\"" Sep 6 01:24:48.081456 env[1472]: time="2025-09-06T01:24:48.081113024Z" level=info msg="Container to stop \"023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:48.081456 env[1472]: time="2025-09-06T01:24:48.081130224Z" level=info msg="Container to stop \"dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:24:48.084073 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374-shm.mount: Deactivated successfully. Sep 6 01:24:48.091873 env[1472]: time="2025-09-06T01:24:48.091835934Z" level=info msg="RemoveContainer for \"023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f\"" Sep 6 01:24:48.093579 systemd[1]: cri-containerd-4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374.scope: Deactivated successfully. Sep 6 01:24:48.102871 env[1472]: time="2025-09-06T01:24:48.102829005Z" level=info msg="RemoveContainer for \"023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f\" returns successfully" Sep 6 01:24:48.119547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374-rootfs.mount: Deactivated successfully. 
Sep 6 01:24:48.139959 env[1472]: time="2025-09-06T01:24:48.139775829Z" level=info msg="shim disconnected" id=4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374 Sep 6 01:24:48.139959 env[1472]: time="2025-09-06T01:24:48.139945070Z" level=warning msg="cleaning up after shim disconnected" id=4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374 namespace=k8s.io Sep 6 01:24:48.139959 env[1472]: time="2025-09-06T01:24:48.139955910Z" level=info msg="cleaning up dead shim" Sep 6 01:24:48.148698 env[1472]: time="2025-09-06T01:24:48.147604531Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4306 runtime=io.containerd.runc.v2\n" Sep 6 01:24:48.148698 env[1472]: time="2025-09-06T01:24:48.147892692Z" level=info msg="TearDown network for sandbox \"4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374\" successfully" Sep 6 01:24:48.148698 env[1472]: time="2025-09-06T01:24:48.147914492Z" level=info msg="StopPodSandbox for \"4291501f057bb253f4b08c8f85baaa1be811c1aa2bb87c79c1fd6bc9ad6d2374\" returns successfully" Sep 6 01:24:48.249217 kubelet[2426]: I0906 01:24:48.249180 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-run\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.249485 kubelet[2426]: I0906 01:24:48.249437 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-lib-modules\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.249485 kubelet[2426]: I0906 01:24:48.249465 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-etc-cni-netd\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.249564 kubelet[2426]: I0906 01:24:48.249300 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.249564 kubelet[2426]: I0906 01:24:48.249517 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.249613 kubelet[2426]: I0906 01:24:48.249599 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.249802 kubelet[2426]: I0906 01:24:48.249670 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-hostproc\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.249802 kubelet[2426]: I0906 01:24:48.249702 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-ipsec-secrets\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.249802 kubelet[2426]: I0906 01:24:48.249732 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-host-proc-sys-kernel\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.249802 kubelet[2426]: I0906 01:24:48.249736 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-hostproc" (OuterVolumeSpecName: "hostproc") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.249802 kubelet[2426]: I0906 01:24:48.249751 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-bpf-maps\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.249802 kubelet[2426]: I0906 01:24:48.249766 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-xtables-lock\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.249969 kubelet[2426]: I0906 01:24:48.249784 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-config-path\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.250166 kubelet[2426]: I0906 01:24:48.250016 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-cgroup\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.250166 kubelet[2426]: I0906 01:24:48.250044 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83f72a04-3ab8-4527-b04c-9761fa268e81-clustermesh-secrets\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.250166 kubelet[2426]: I0906 01:24:48.250060 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/83f72a04-3ab8-4527-b04c-9761fa268e81-hubble-tls\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.250166 kubelet[2426]: I0906 01:24:48.250092 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-host-proc-sys-net\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.250166 kubelet[2426]: I0906 01:24:48.250112 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cni-path\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.250166 kubelet[2426]: I0906 01:24:48.250130 2426 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvzk6\" (UniqueName: \"kubernetes.io/projected/83f72a04-3ab8-4527-b04c-9761fa268e81-kube-api-access-vvzk6\") pod \"83f72a04-3ab8-4527-b04c-9761fa268e81\" (UID: \"83f72a04-3ab8-4527-b04c-9761fa268e81\") " Sep 6 01:24:48.250451 kubelet[2426]: I0906 01:24:48.250393 2426 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-run\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.250451 kubelet[2426]: I0906 01:24:48.250414 2426 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-lib-modules\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.250451 kubelet[2426]: I0906 01:24:48.250423 2426 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-etc-cni-netd\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.250451 kubelet[2426]: I0906 01:24:48.250431 2426 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-hostproc\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.250703 kubelet[2426]: I0906 01:24:48.250665 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.250749 kubelet[2426]: I0906 01:24:48.250704 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.250749 kubelet[2426]: I0906 01:24:48.250721 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.250749 kubelet[2426]: I0906 01:24:48.250735 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.252592 kubelet[2426]: I0906 01:24:48.252552 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 01:24:48.252684 kubelet[2426]: I0906 01:24:48.252611 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.255435 systemd[1]: var-lib-kubelet-pods-83f72a04\x2d3ab8\x2d4527\x2db04c\x2d9761fa268e81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvvzk6.mount: Deactivated successfully. Sep 6 01:24:48.257490 systemd[1]: var-lib-kubelet-pods-83f72a04\x2d3ab8\x2d4527\x2db04c\x2d9761fa268e81-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 6 01:24:48.259453 kubelet[2426]: I0906 01:24:48.259420 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cni-path" (OuterVolumeSpecName: "cni-path") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 01:24:48.259567 kubelet[2426]: I0906 01:24:48.259543 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:24:48.260474 kubelet[2426]: I0906 01:24:48.260437 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f72a04-3ab8-4527-b04c-9761fa268e81-kube-api-access-vvzk6" (OuterVolumeSpecName: "kube-api-access-vvzk6") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "kube-api-access-vvzk6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:48.260888 kubelet[2426]: I0906 01:24:48.260864 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83f72a04-3ab8-4527-b04c-9761fa268e81-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 01:24:48.262565 kubelet[2426]: I0906 01:24:48.262531 2426 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f72a04-3ab8-4527-b04c-9761fa268e81-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "83f72a04-3ab8-4527-b04c-9761fa268e81" (UID: "83f72a04-3ab8-4527-b04c-9761fa268e81"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 01:24:48.351459 kubelet[2426]: I0906 01:24:48.351207 2426 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cni-path\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351459 kubelet[2426]: I0906 01:24:48.351237 2426 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vvzk6\" (UniqueName: \"kubernetes.io/projected/83f72a04-3ab8-4527-b04c-9761fa268e81-kube-api-access-vvzk6\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351459 kubelet[2426]: I0906 01:24:48.351251 2426 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351459 kubelet[2426]: I0906 01:24:48.351260 2426 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-bpf-maps\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351459 kubelet[2426]: I0906 01:24:48.351286 2426 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-xtables-lock\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351459 kubelet[2426]: I0906 01:24:48.351296 2426 reconciler_common.go:299] "Volume 
detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351459 kubelet[2426]: I0906 01:24:48.351306 2426 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-config-path\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351459 kubelet[2426]: I0906 01:24:48.351314 2426 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-cilium-cgroup\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351794 kubelet[2426]: I0906 01:24:48.351323 2426 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83f72a04-3ab8-4527-b04c-9761fa268e81-clustermesh-secrets\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351794 kubelet[2426]: I0906 01:24:48.351333 2426 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83f72a04-3ab8-4527-b04c-9761fa268e81-hubble-tls\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.351794 kubelet[2426]: I0906 01:24:48.351342 2426 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83f72a04-3ab8-4527-b04c-9761fa268e81-host-proc-sys-net\") on node \"ci-3510.3.8-n-dced7724bc\" DevicePath \"\"" Sep 6 01:24:48.543201 systemd[1]: var-lib-kubelet-pods-83f72a04\x2d3ab8\x2d4527\x2db04c\x2d9761fa268e81-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 01:24:48.543314 systemd[1]: var-lib-kubelet-pods-83f72a04\x2d3ab8\x2d4527\x2db04c\x2d9761fa268e81-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:24:49.084202 kubelet[2426]: I0906 01:24:49.084169 2426 scope.go:117] "RemoveContainer" containerID="dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e" Sep 6 01:24:49.086227 env[1472]: time="2025-09-06T01:24:49.086196215Z" level=info msg="RemoveContainer for \"dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e\"" Sep 6 01:24:49.088224 systemd[1]: Removed slice kubepods-burstable-pod83f72a04_3ab8_4527_b04c_9761fa268e81.slice. Sep 6 01:24:49.094235 env[1472]: time="2025-09-06T01:24:49.094122037Z" level=info msg="RemoveContainer for \"dc5609d620f1270e0f92af2aabd8c416d06b45921bce40d11c2de8f39973ac6e\" returns successfully" Sep 6 01:24:49.149972 kubelet[2426]: I0906 01:24:49.149933 2426 memory_manager.go:355] "RemoveStaleState removing state" podUID="83f72a04-3ab8-4527-b04c-9761fa268e81" containerName="mount-cgroup" Sep 6 01:24:49.150148 kubelet[2426]: I0906 01:24:49.150135 2426 memory_manager.go:355] "RemoveStaleState removing state" podUID="83f72a04-3ab8-4527-b04c-9761fa268e81" containerName="mount-cgroup" Sep 6 01:24:49.155062 systemd[1]: Created slice kubepods-burstable-pod00afe7cb_7b99_48f1_9f99_5140c4ad71cc.slice. 
Sep 6 01:24:49.255744 kubelet[2426]: I0906 01:24:49.255707 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-cni-path\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.255954 kubelet[2426]: I0906 01:24:49.255940 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-xtables-lock\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256065 kubelet[2426]: I0906 01:24:49.256050 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-etc-cni-netd\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256157 kubelet[2426]: I0906 01:24:49.256145 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-cilium-ipsec-secrets\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256244 kubelet[2426]: I0906 01:24:49.256233 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-cilium-cgroup\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256387 kubelet[2426]: I0906 01:24:49.256374 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-hubble-tls\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256479 kubelet[2426]: I0906 01:24:49.256466 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-hostproc\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256562 kubelet[2426]: I0906 01:24:49.256551 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-lib-modules\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256640 kubelet[2426]: I0906 01:24:49.256629 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-cilium-config-path\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256722 kubelet[2426]: I0906 01:24:49.256710 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-host-proc-sys-net\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9" Sep 6 01:24:49.256810 kubelet[2426]: I0906 01:24:49.256799 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-host-proc-sys-kernel\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9"
Sep 6 01:24:49.256896 kubelet[2426]: I0906 01:24:49.256883 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl252\" (UniqueName: \"kubernetes.io/projected/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-kube-api-access-xl252\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9"
Sep 6 01:24:49.256985 kubelet[2426]: I0906 01:24:49.256972 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-cilium-run\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9"
Sep 6 01:24:49.257067 kubelet[2426]: I0906 01:24:49.257056 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-bpf-maps\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9"
Sep 6 01:24:49.257152 kubelet[2426]: I0906 01:24:49.257140 2426 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00afe7cb-7b99-48f1-9f99-5140c4ad71cc-clustermesh-secrets\") pod \"cilium-rsjr9\" (UID: \"00afe7cb-7b99-48f1-9f99-5140c4ad71cc\") " pod="kube-system/cilium-rsjr9"
Sep 6 01:24:49.460207 env[1472]: time="2025-09-06T01:24:49.458988294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsjr9,Uid:00afe7cb-7b99-48f1-9f99-5140c4ad71cc,Namespace:kube-system,Attempt:0,}"
Sep 6 01:24:49.488321 env[1472]: time="2025-09-06T01:24:49.488209975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 01:24:49.488448 env[1472]: time="2025-09-06T01:24:49.488330975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 01:24:49.488448 env[1472]: time="2025-09-06T01:24:49.488358615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 01:24:49.488525 env[1472]: time="2025-09-06T01:24:49.488490096Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014 pid=4340 runtime=io.containerd.runc.v2
Sep 6 01:24:49.501498 systemd[1]: Started cri-containerd-9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014.scope.
Sep 6 01:24:49.525979 env[1472]: time="2025-09-06T01:24:49.525800800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rsjr9,Uid:00afe7cb-7b99-48f1-9f99-5140c4ad71cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\""
Sep 6 01:24:49.530117 env[1472]: time="2025-09-06T01:24:49.530086692Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 01:24:49.550541 kubelet[2426]: I0906 01:24:49.550486 2426 setters.go:602] "Node became not ready" node="ci-3510.3.8-n-dced7724bc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T01:24:49Z","lastTransitionTime":"2025-09-06T01:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 6 01:24:49.558611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271439243.mount: Deactivated successfully.
Sep 6 01:24:49.565631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270721232.mount: Deactivated successfully.
Sep 6 01:24:49.581786 env[1472]: time="2025-09-06T01:24:49.581747716Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717\""
Sep 6 01:24:49.582559 env[1472]: time="2025-09-06T01:24:49.582528638Z" level=info msg="StartContainer for \"6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717\""
Sep 6 01:24:49.599023 systemd[1]: Started cri-containerd-6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717.scope.
Sep 6 01:24:49.631094 env[1472]: time="2025-09-06T01:24:49.631051053Z" level=info msg="StartContainer for \"6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717\" returns successfully"
Sep 6 01:24:49.634783 systemd[1]: cri-containerd-6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717.scope: Deactivated successfully.
Sep 6 01:24:49.662980 kubelet[2426]: I0906 01:24:49.662924 2426 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f72a04-3ab8-4527-b04c-9761fa268e81" path="/var/lib/kubelet/pods/83f72a04-3ab8-4527-b04c-9761fa268e81/volumes"
Sep 6 01:24:49.687997 env[1472]: time="2025-09-06T01:24:49.687952691Z" level=info msg="shim disconnected" id=6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717
Sep 6 01:24:49.688182 env[1472]: time="2025-09-06T01:24:49.688165572Z" level=warning msg="cleaning up after shim disconnected" id=6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717 namespace=k8s.io
Sep 6 01:24:49.688241 env[1472]: time="2025-09-06T01:24:49.688227932Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:49.695362 env[1472]: time="2025-09-06T01:24:49.695326632Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4423 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:49.942561 kubelet[2426]: W0906 01:24:49.942493 2426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f72a04_3ab8_4527_b04c_9761fa268e81.slice/cri-containerd-023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f.scope WatchSource:0}: container "023933da4a4a9dac491b8f098f31236170f51b3fdb85569969f297708d5d6b0f" in namespace "k8s.io": not found
Sep 6 01:24:50.091668 env[1472]: time="2025-09-06T01:24:50.091616333Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 01:24:50.123844 env[1472]: time="2025-09-06T01:24:50.123772302Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d\""
Sep 6 01:24:50.124402 env[1472]: time="2025-09-06T01:24:50.124336703Z" level=info msg="StartContainer for \"4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d\""
Sep 6 01:24:50.140877 systemd[1]: Started cri-containerd-4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d.scope.
Sep 6 01:24:50.171146 systemd[1]: cri-containerd-4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d.scope: Deactivated successfully.
Sep 6 01:24:50.174429 env[1472]: time="2025-09-06T01:24:50.174392401Z" level=info msg="StartContainer for \"4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d\" returns successfully"
Sep 6 01:24:50.203626 env[1472]: time="2025-09-06T01:24:50.203107800Z" level=info msg="shim disconnected" id=4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d
Sep 6 01:24:50.203958 env[1472]: time="2025-09-06T01:24:50.203929082Z" level=warning msg="cleaning up after shim disconnected" id=4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d namespace=k8s.io
Sep 6 01:24:50.204060 env[1472]: time="2025-09-06T01:24:50.204046203Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:50.211200 env[1472]: time="2025-09-06T01:24:50.211160702Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4481 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:50.780572 kubelet[2426]: E0906 01:24:50.780527 2426 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 01:24:51.095291 env[1472]: time="2025-09-06T01:24:51.094901452Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 01:24:51.143089 env[1472]: time="2025-09-06T01:24:51.143037263Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262\""
Sep 6 01:24:51.145051 env[1472]: time="2025-09-06T01:24:51.144468347Z" level=info msg="StartContainer for \"2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262\""
Sep 6 01:24:51.163021 systemd[1]: Started cri-containerd-2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262.scope.
Sep 6 01:24:51.193313 systemd[1]: cri-containerd-2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262.scope: Deactivated successfully.
Sep 6 01:24:51.194711 env[1472]: time="2025-09-06T01:24:51.194670444Z" level=info msg="StartContainer for \"2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262\" returns successfully"
Sep 6 01:24:51.228156 env[1472]: time="2025-09-06T01:24:51.228105655Z" level=info msg="shim disconnected" id=2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262
Sep 6 01:24:51.228156 env[1472]: time="2025-09-06T01:24:51.228152775Z" level=warning msg="cleaning up after shim disconnected" id=2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262 namespace=k8s.io
Sep 6 01:24:51.228156 env[1472]: time="2025-09-06T01:24:51.228161895Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:51.235187 env[1472]: time="2025-09-06T01:24:51.235138274Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4540 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:51.545944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262-rootfs.mount: Deactivated successfully.
Sep 6 01:24:52.098055 env[1472]: time="2025-09-06T01:24:52.097995939Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 01:24:52.129544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661238910.mount: Deactivated successfully.
Sep 6 01:24:52.135985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929141687.mount: Deactivated successfully.
Sep 6 01:24:52.150202 env[1472]: time="2025-09-06T01:24:52.150162479Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9\""
Sep 6 01:24:52.151647 env[1472]: time="2025-09-06T01:24:52.151465323Z" level=info msg="StartContainer for \"d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9\""
Sep 6 01:24:52.164926 systemd[1]: Started cri-containerd-d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9.scope.
Sep 6 01:24:52.193600 systemd[1]: cri-containerd-d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9.scope: Deactivated successfully.
Sep 6 01:24:52.195072 env[1472]: time="2025-09-06T01:24:52.195007960Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00afe7cb_7b99_48f1_9f99_5140c4ad71cc.slice/cri-containerd-d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9.scope/memory.events\": no such file or directory"
Sep 6 01:24:52.199876 env[1472]: time="2025-09-06T01:24:52.199838213Z" level=info msg="StartContainer for \"d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9\" returns successfully"
Sep 6 01:24:52.228594 env[1472]: time="2025-09-06T01:24:52.228545730Z" level=info msg="shim disconnected" id=d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9
Sep 6 01:24:52.228594 env[1472]: time="2025-09-06T01:24:52.228589730Z" level=warning msg="cleaning up after shim disconnected" id=d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9 namespace=k8s.io
Sep 6 01:24:52.228594 env[1472]: time="2025-09-06T01:24:52.228599490Z" level=info msg="cleaning up dead shim"
Sep 6 01:24:52.235069 env[1472]: time="2025-09-06T01:24:52.235015867Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:24:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4597 runtime=io.containerd.runc.v2\n"
Sep 6 01:24:53.059323 kubelet[2426]: W0906 01:24:53.059249 2426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00afe7cb_7b99_48f1_9f99_5140c4ad71cc.slice/cri-containerd-6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717.scope WatchSource:0}: task 6a15e7222ac9205661a45b12f233750b11d9c23cdc434d9c44d09b3c5e5e0717 not found: not found
Sep 6 01:24:53.101606 env[1472]: time="2025-09-06T01:24:53.101563834Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 01:24:53.129902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622939302.mount: Deactivated successfully.
Sep 6 01:24:53.143410 env[1472]: time="2025-09-06T01:24:53.143351266Z" level=info msg="CreateContainer within sandbox \"9ab7e52b1dd7a7ec668490e358fc446ea0f70a8129b72f7fc6b4242ed8b50014\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10\""
Sep 6 01:24:53.144087 env[1472]: time="2025-09-06T01:24:53.144025747Z" level=info msg="StartContainer for \"34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10\""
Sep 6 01:24:53.161266 systemd[1]: Started cri-containerd-34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10.scope.
Sep 6 01:24:53.196618 env[1472]: time="2025-09-06T01:24:53.196571967Z" level=info msg="StartContainer for \"34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10\" returns successfully"
Sep 6 01:24:53.488307 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 6 01:24:54.126546 kubelet[2426]: I0906 01:24:54.126464 2426 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rsjr9" podStartSLOduration=5.126448755 podStartE2EDuration="5.126448755s" podCreationTimestamp="2025-09-06 01:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:24:54.125970873 +0000 UTC m=+198.548154528" watchObservedRunningTime="2025-09-06 01:24:54.126448755 +0000 UTC m=+198.548632450"
Sep 6 01:24:54.368339 systemd[1]: run-containerd-runc-k8s.io-34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10-runc.8T4Ooc.mount: Deactivated successfully.
Sep 6 01:24:56.138692 systemd-networkd[1623]: lxc_health: Link UP
Sep 6 01:24:56.151529 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 6 01:24:56.151375 systemd-networkd[1623]: lxc_health: Gained carrier
Sep 6 01:24:56.168131 kubelet[2426]: W0906 01:24:56.168077 2426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00afe7cb_7b99_48f1_9f99_5140c4ad71cc.slice/cri-containerd-4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d.scope WatchSource:0}: task 4216f46f695ee9702258714a8e8fcebdfe6be0bed43537e1817cfce9a384082d not found: not found
Sep 6 01:24:56.516944 systemd[1]: run-containerd-runc-k8s.io-34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10-runc.fyOm03.mount: Deactivated successfully.
Sep 6 01:24:57.417435 systemd-networkd[1623]: lxc_health: Gained IPv6LL
Sep 6 01:24:58.697435 systemd[1]: run-containerd-runc-k8s.io-34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10-runc.Sdin8f.mount: Deactivated successfully.
Sep 6 01:24:59.278554 kubelet[2426]: W0906 01:24:59.278389 2426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00afe7cb_7b99_48f1_9f99_5140c4ad71cc.slice/cri-containerd-2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262.scope WatchSource:0}: task 2a7b247a5a76d3fbecb6b5d9dc3ca98e26302d2033d8fd2d21e74dfcf7c38262 not found: not found
Sep 6 01:25:00.838501 systemd[1]: run-containerd-runc-k8s.io-34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10-runc.Oa59IZ.mount: Deactivated successfully.
Sep 6 01:25:02.385120 kubelet[2426]: W0906 01:25:02.385085 2426 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00afe7cb_7b99_48f1_9f99_5140c4ad71cc.slice/cri-containerd-d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9.scope WatchSource:0}: task d0d344deb01e6fe68058b8a752f0e9f633c554f4a665f932bcc991619e3241c9 not found: not found
Sep 6 01:25:02.959031 systemd[1]: run-containerd-runc-k8s.io-34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10-runc.wxgM7X.mount: Deactivated successfully.
Sep 6 01:25:05.073049 systemd[1]: run-containerd-runc-k8s.io-34466c54197144c4c068564608e6c55baf93793271b91673e85769c6bb110c10-runc.tk2Squ.mount: Deactivated successfully.
Sep 6 01:25:05.217475 sshd[4283]: pam_unix(sshd:session): session closed for user core
Sep 6 01:25:05.220477 systemd[1]: session-27.scope: Deactivated successfully.
Sep 6 01:25:05.220702 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit.
Sep 6 01:25:05.221480 systemd[1]: sshd@24-10.200.20.25:22-10.200.16.10:49152.service: Deactivated successfully.
Sep 6 01:25:05.222291 systemd-logind[1463]: Removed session 27.