Sep 13 01:32:43.059326 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 01:32:43.059345 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 12 23:05:37 -00 2025
Sep 13 01:32:43.059353 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 13 01:32:43.059360 kernel: printk: bootconsole [pl11] enabled
Sep 13 01:32:43.059365 kernel: efi: EFI v2.70 by EDK II
Sep 13 01:32:43.059371 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Sep 13 01:32:43.059377 kernel: random: crng init done
Sep 13 01:32:43.059382 kernel: ACPI: Early table checksum verification disabled
Sep 13 01:32:43.059388 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 13 01:32:43.059393 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059399 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059404 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 01:32:43.059411 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059417 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059423 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059429 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059435 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059442 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059448 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 13 01:32:43.059453 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 01:32:43.059459 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 13 01:32:43.059465 kernel: NUMA: Failed to initialise from firmware
Sep 13 01:32:43.059471 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:32:43.059476 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Sep 13 01:32:43.059482 kernel: Zone ranges:
Sep 13 01:32:43.059488 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 13 01:32:43.059493 kernel: DMA32 empty
Sep 13 01:32:43.059499 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:32:43.059505 kernel: Movable zone start for each node
Sep 13 01:32:43.059511 kernel: Early memory node ranges
Sep 13 01:32:43.059517 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 13 01:32:43.059522 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 13 01:32:43.059528 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 13 01:32:43.059534 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 13 01:32:43.059539 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 13 01:32:43.059545 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 13 01:32:43.059550 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 01:32:43.059556 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 01:32:43.059562 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 13 01:32:43.059568 kernel: psci: probing for conduit method from ACPI.
Sep 13 01:32:43.059577 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 01:32:43.059583 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 01:32:43.059589 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 13 01:32:43.059595 kernel: psci: SMC Calling Convention v1.4
Sep 13 01:32:43.059601 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Sep 13 01:32:43.059608 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Sep 13 01:32:43.059614 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Sep 13 01:32:43.059620 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Sep 13 01:32:43.059626 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 01:32:43.059633 kernel: Detected PIPT I-cache on CPU0
Sep 13 01:32:43.059639 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 01:32:43.059645 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 01:32:43.059651 kernel: CPU features: detected: Spectre-BHB
Sep 13 01:32:43.059657 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 01:32:43.059663 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 01:32:43.059669 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 01:32:43.059676 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 13 01:32:43.059682 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 01:32:43.059688 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 13 01:32:43.059694 kernel: Policy zone: Normal
Sep 13 01:32:43.059702 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:32:43.059708 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 01:32:43.059715 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 01:32:43.059721 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 01:32:43.059727 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 01:32:43.059733 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Sep 13 01:32:43.059739 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved)
Sep 13 01:32:43.059746 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 01:32:43.059752 kernel: trace event string verifier disabled
Sep 13 01:32:43.059759 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 01:32:43.059765 kernel: rcu: RCU event tracing is enabled.
Sep 13 01:32:43.059772 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 01:32:43.059778 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 01:32:43.059784 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 01:32:43.059790 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 01:32:43.059796 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 01:32:43.059802 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 01:32:43.059808 kernel: GICv3: 960 SPIs implemented
Sep 13 01:32:43.059815 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 01:32:43.059821 kernel: GICv3: Distributor has no Range Selector support
Sep 13 01:32:43.059827 kernel: Root IRQ handler: gic_handle_irq
Sep 13 01:32:43.059834 kernel: GICv3: 16 PPIs implemented
Sep 13 01:32:43.059840 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 13 01:32:43.059846 kernel: ITS: No ITS available, not enabling LPIs
Sep 13 01:32:43.059852 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:32:43.059858 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 01:32:43.059864 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 01:32:43.059871 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 01:32:43.059877 kernel: Console: colour dummy device 80x25
Sep 13 01:32:43.059885 kernel: printk: console [tty1] enabled
Sep 13 01:32:43.059891 kernel: ACPI: Core revision 20210730
Sep 13 01:32:43.059897 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 01:32:43.059904 kernel: pid_max: default: 32768 minimum: 301
Sep 13 01:32:43.059910 kernel: LSM: Security Framework initializing
Sep 13 01:32:43.059916 kernel: SELinux: Initializing.
Sep 13 01:32:43.059922 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:32:43.059929 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 01:32:43.059935 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 13 01:32:43.059957 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 13 01:32:43.059964 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 01:32:43.059970 kernel: Remapping and enabling EFI services.
Sep 13 01:32:43.059976 kernel: smp: Bringing up secondary CPUs ...
Sep 13 01:32:43.059982 kernel: Detected PIPT I-cache on CPU1
Sep 13 01:32:43.059989 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 13 01:32:43.059995 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 01:32:43.060001 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 01:32:43.060007 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 01:32:43.060014 kernel: SMP: Total of 2 processors activated.
Sep 13 01:32:43.060021 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 01:32:43.060028 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 13 01:32:43.060034 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 01:32:43.060040 kernel: CPU features: detected: CRC32 instructions
Sep 13 01:32:43.060047 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 01:32:43.060053 kernel: CPU features: detected: LSE atomic instructions
Sep 13 01:32:43.060059 kernel: CPU features: detected: Privileged Access Never
Sep 13 01:32:43.060065 kernel: CPU: All CPU(s) started at EL1
Sep 13 01:32:43.060072 kernel: alternatives: patching kernel code
Sep 13 01:32:43.060079 kernel: devtmpfs: initialized
Sep 13 01:32:43.060089 kernel: KASLR enabled
Sep 13 01:32:43.060096 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 01:32:43.060104 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 01:32:43.060110 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 01:32:43.060117 kernel: SMBIOS 3.1.0 present.
Sep 13 01:32:43.060123 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 01:32:43.060130 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 01:32:43.060137 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 01:32:43.060144 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 01:32:43.060151 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 01:32:43.060158 kernel: audit: initializing netlink subsys (disabled)
Sep 13 01:32:43.060164 kernel: audit: type=2000 audit(0.096:1): state=initialized audit_enabled=0 res=1
Sep 13 01:32:43.060171 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 01:32:43.060177 kernel: cpuidle: using governor menu
Sep 13 01:32:43.060184 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 01:32:43.060192 kernel: ASID allocator initialised with 32768 entries
Sep 13 01:32:43.060198 kernel: ACPI: bus type PCI registered
Sep 13 01:32:43.060205 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 01:32:43.060212 kernel: Serial: AMBA PL011 UART driver
Sep 13 01:32:43.060218 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 01:32:43.060225 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 01:32:43.060231 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 01:32:43.060238 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 01:32:43.060244 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 01:32:43.060252 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 01:32:43.060258 kernel: ACPI: Added _OSI(Module Device)
Sep 13 01:32:43.060265 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 01:32:43.060271 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 01:32:43.060278 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 01:32:43.060284 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 01:32:43.060291 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 01:32:43.060298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 01:32:43.060305 kernel: ACPI: Interpreter enabled
Sep 13 01:32:43.060312 kernel: ACPI: Using GIC for interrupt routing
Sep 13 01:32:43.060319 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 01:32:43.060325 kernel: printk: console [ttyAMA0] enabled
Sep 13 01:32:43.060332 kernel: printk: bootconsole [pl11] disabled
Sep 13 01:32:43.060339 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 13 01:32:43.060345 kernel: iommu: Default domain type: Translated
Sep 13 01:32:43.060352 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 01:32:43.060359 kernel: vgaarb: loaded
Sep 13 01:32:43.060365 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 01:32:43.060372 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 01:32:43.060380 kernel: PTP clock support registered
Sep 13 01:32:43.060386 kernel: Registered efivars operations
Sep 13 01:32:43.060392 kernel: No ACPI PMU IRQ for CPU0
Sep 13 01:32:43.060399 kernel: No ACPI PMU IRQ for CPU1
Sep 13 01:32:43.060405 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 01:32:43.060412 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 01:32:43.060419 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 01:32:43.060425 kernel: pnp: PnP ACPI init
Sep 13 01:32:43.060432 kernel: pnp: PnP ACPI: found 0 devices
Sep 13 01:32:43.060439 kernel: NET: Registered PF_INET protocol family
Sep 13 01:32:43.060446 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 01:32:43.060452 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 01:32:43.060459 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 01:32:43.060466 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 01:32:43.060473 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 13 01:32:43.060479 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 01:32:43.060486 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:32:43.060494 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 01:32:43.060501 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 01:32:43.060508 kernel: PCI: CLS 0 bytes, default 64
Sep 13 01:32:43.060514 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 13 01:32:43.060521 kernel: kvm [1]: HYP mode not available
Sep 13 01:32:43.060528 kernel: Initialise system trusted keyrings
Sep 13 01:32:43.060534 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 01:32:43.060541 kernel: Key type asymmetric registered
Sep 13 01:32:43.060547 kernel: Asymmetric key parser 'x509' registered
Sep 13 01:32:43.060555 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 01:32:43.060562 kernel: io scheduler mq-deadline registered
Sep 13 01:32:43.060569 kernel: io scheduler kyber registered
Sep 13 01:32:43.060575 kernel: io scheduler bfq registered
Sep 13 01:32:43.060582 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 01:32:43.060589 kernel: thunder_xcv, ver 1.0
Sep 13 01:32:43.060595 kernel: thunder_bgx, ver 1.0
Sep 13 01:32:43.060601 kernel: nicpf, ver 1.0
Sep 13 01:32:43.060608 kernel: nicvf, ver 1.0
Sep 13 01:32:43.060733 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 01:32:43.060797 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T01:32:42 UTC (1757727162)
Sep 13 01:32:43.060806 kernel: efifb: probing for efifb
Sep 13 01:32:43.060813 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 01:32:43.060819 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 01:32:43.060826 kernel: efifb: scrolling: redraw
Sep 13 01:32:43.060832 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 01:32:43.060839 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:32:43.060847 kernel: fb0: EFI VGA frame buffer device
Sep 13 01:32:43.060854 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 13 01:32:43.060861 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 01:32:43.060867 kernel: NET: Registered PF_INET6 protocol family
Sep 13 01:32:43.060874 kernel: Segment Routing with IPv6
Sep 13 01:32:43.060880 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 01:32:43.060887 kernel: NET: Registered PF_PACKET protocol family
Sep 13 01:32:43.060894 kernel: Key type dns_resolver registered
Sep 13 01:32:43.060900 kernel: registered taskstats version 1
Sep 13 01:32:43.060907 kernel: Loading compiled-in X.509 certificates
Sep 13 01:32:43.060915 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 47ac98e9306f36eebe4291d409359a5a5d0c2b9c'
Sep 13 01:32:43.060922 kernel: Key type .fscrypt registered
Sep 13 01:32:43.060928 kernel: Key type fscrypt-provisioning registered
Sep 13 01:32:43.060935 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 01:32:43.060953 kernel: ima: Allocated hash algorithm: sha1
Sep 13 01:32:43.060960 kernel: ima: No architecture policies found
Sep 13 01:32:43.060966 kernel: clk: Disabling unused clocks
Sep 13 01:32:43.060973 kernel: Freeing unused kernel memory: 36416K
Sep 13 01:32:43.060981 kernel: Run /init as init process
Sep 13 01:32:43.060988 kernel: with arguments:
Sep 13 01:32:43.060994 kernel: /init
Sep 13 01:32:43.061001 kernel: with environment:
Sep 13 01:32:43.061007 kernel: HOME=/
Sep 13 01:32:43.061014 kernel: TERM=linux
Sep 13 01:32:43.061020 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 01:32:43.061029 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 01:32:43.061039 systemd[1]: Detected virtualization microsoft.
Sep 13 01:32:43.061047 systemd[1]: Detected architecture arm64.
Sep 13 01:32:43.061053 systemd[1]: Running in initrd.
Sep 13 01:32:43.061061 systemd[1]: No hostname configured, using default hostname.
Sep 13 01:32:43.061067 systemd[1]: Hostname set to .
Sep 13 01:32:43.061075 systemd[1]: Initializing machine ID from random generator.
Sep 13 01:32:43.061082 systemd[1]: Queued start job for default target initrd.target.
Sep 13 01:32:43.061089 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 01:32:43.061097 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:32:43.061104 systemd[1]: Reached target paths.target.
Sep 13 01:32:43.061111 systemd[1]: Reached target slices.target.
Sep 13 01:32:43.061118 systemd[1]: Reached target swap.target.
Sep 13 01:32:43.061125 systemd[1]: Reached target timers.target.
Sep 13 01:32:43.061132 systemd[1]: Listening on iscsid.socket.
Sep 13 01:32:43.061139 systemd[1]: Listening on iscsiuio.socket.
Sep 13 01:32:43.061146 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 01:32:43.061154 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 01:32:43.061162 systemd[1]: Listening on systemd-journald.socket.
Sep 13 01:32:43.061169 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 01:32:43.061176 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 01:32:43.061183 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 01:32:43.061190 systemd[1]: Reached target sockets.target.
Sep 13 01:32:43.061197 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 01:32:43.061204 systemd[1]: Finished network-cleanup.service.
Sep 13 01:32:43.061211 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 01:32:43.061220 systemd[1]: Starting systemd-journald.service...
Sep 13 01:32:43.061227 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:32:43.061234 systemd[1]: Starting systemd-resolved.service...
Sep 13 01:32:43.061245 systemd-journald[276]: Journal started
Sep 13 01:32:43.061285 systemd-journald[276]: Runtime Journal (/run/log/journal/b31145d9b7e445cf9e66b52fddbd2636) is 8.0M, max 78.5M, 70.5M free.
Sep 13 01:32:43.051360 systemd-modules-load[277]: Inserted module 'overlay'
Sep 13 01:32:43.084328 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 01:32:43.090656 systemd-resolved[278]: Positive Trust Anchors:
Sep 13 01:32:43.108032 systemd[1]: Started systemd-journald.service.
Sep 13 01:32:43.108057 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 01:32:43.090673 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 01:32:43.152781 kernel: Bridge firewalling registered
Sep 13 01:32:43.152804 kernel: audit: type=1130 audit(1757727163.132:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.090700 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 01:32:43.101370 systemd-resolved[278]: Defaulting to hostname 'linux'.
Sep 13 01:32:43.217907 kernel: audit: type=1130 audit(1757727163.193:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.134301 systemd-modules-load[277]: Inserted module 'br_netfilter'
Sep 13 01:32:43.248915 kernel: audit: type=1130 audit(1757727163.222:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.248935 kernel: SCSI subsystem initialized
Sep 13 01:32:43.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.188677 systemd[1]: Started systemd-resolved.service.
Sep 13 01:32:43.276680 kernel: audit: type=1130 audit(1757727163.253:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.193832 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:32:43.326030 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 01:32:43.326062 kernel: device-mapper: uevent: version 1.0.3
Sep 13 01:32:43.326071 kernel: audit: type=1130 audit(1757727163.296:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.326081 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 13 01:32:43.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.223154 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 01:32:43.254340 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 01:32:43.296369 systemd[1]: Reached target nss-lookup.target.
Sep 13 01:32:43.331220 systemd-modules-load[277]: Inserted module 'dm_multipath'
Sep 13 01:32:43.341395 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 01:32:43.347513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 01:32:43.406446 kernel: audit: type=1130 audit(1757727163.376:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.365226 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:32:43.377671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 01:32:43.442239 kernel: audit: type=1130 audit(1757727163.406:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.408086 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:32:43.436741 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 01:32:43.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.453265 systemd[1]: Starting dracut-cmdline.service...
Sep 13 01:32:43.479470 kernel: audit: type=1130 audit(1757727163.452:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.479493 dracut-cmdline[299]: dracut-dracut-053
Sep 13 01:32:43.479493 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t
Sep 13 01:32:43.479493 dracut-cmdline[299]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=563df7b8a9b19b8c496587ae06f3c3ec1604a5105c3a3f313c9ccaa21d8055ca
Sep 13 01:32:43.554928 kernel: audit: type=1130 audit(1757727163.490:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.485708 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:32:43.576963 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 01:32:43.592020 kernel: iscsi: registered transport (tcp)
Sep 13 01:32:43.618271 kernel: iscsi: registered transport (qla4xxx)
Sep 13 01:32:43.618309 kernel: QLogic iSCSI HBA Driver
Sep 13 01:32:43.647575 systemd[1]: Finished dracut-cmdline.service.
Sep 13 01:32:43.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:43.653377 systemd[1]: Starting dracut-pre-udev.service...
Sep 13 01:32:43.712965 kernel: raid6: neonx8 gen() 13760 MB/s
Sep 13 01:32:43.730951 kernel: raid6: neonx8 xor() 10831 MB/s
Sep 13 01:32:43.751952 kernel: raid6: neonx4 gen() 13497 MB/s
Sep 13 01:32:43.773971 kernel: raid6: neonx4 xor() 11077 MB/s
Sep 13 01:32:43.794951 kernel: raid6: neonx2 gen() 12977 MB/s
Sep 13 01:32:43.816951 kernel: raid6: neonx2 xor() 10244 MB/s
Sep 13 01:32:43.837951 kernel: raid6: neonx1 gen() 10637 MB/s
Sep 13 01:32:43.858982 kernel: raid6: neonx1 xor() 8792 MB/s
Sep 13 01:32:43.880981 kernel: raid6: int64x8 gen() 6263 MB/s
Sep 13 01:32:43.901967 kernel: raid6: int64x8 xor() 3542 MB/s
Sep 13 01:32:43.922979 kernel: raid6: int64x4 gen() 7224 MB/s
Sep 13 01:32:43.944976 kernel: raid6: int64x4 xor() 3858 MB/s
Sep 13 01:32:43.965976 kernel: raid6: int64x2 gen() 6155 MB/s
Sep 13 01:32:43.986971 kernel: raid6: int64x2 xor() 3322 MB/s
Sep 13 01:32:44.008975 kernel: raid6: int64x1 gen() 5046 MB/s
Sep 13 01:32:44.034029 kernel: raid6: int64x1 xor() 2647 MB/s
Sep 13 01:32:44.034085 kernel: raid6: using algorithm neonx8 gen() 13760 MB/s
Sep 13 01:32:44.034095 kernel: raid6: .... xor() 10831 MB/s, rmw enabled
Sep 13 01:32:44.038808 kernel: raid6: using neon recovery algorithm
Sep 13 01:32:44.057966 kernel: xor: measuring software checksum speed
Sep 13 01:32:44.066109 kernel: 8regs : 16274 MB/sec
Sep 13 01:32:44.066170 kernel: 32regs : 20676 MB/sec
Sep 13 01:32:44.070338 kernel: arm64_neon : 27747 MB/sec
Sep 13 01:32:44.070369 kernel: xor: using function: arm64_neon (27747 MB/sec)
Sep 13 01:32:44.132992 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 13 01:32:44.142205 systemd[1]: Finished dracut-pre-udev.service.
Sep 13 01:32:44.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:44.150000 audit: BPF prog-id=7 op=LOAD
Sep 13 01:32:44.150000 audit: BPF prog-id=8 op=LOAD
Sep 13 01:32:44.152141 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:32:44.171825 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Sep 13 01:32:44.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:44.177998 systemd[1]: Started systemd-udevd.service.
Sep 13 01:32:44.184966 systemd[1]: Starting dracut-pre-trigger.service...
Sep 13 01:32:44.202296 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Sep 13 01:32:44.229505 systemd[1]: Finished dracut-pre-trigger.service.
Sep 13 01:32:44.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:44.235873 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:32:44.272289 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:32:44.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:44.332152 kernel: hv_vmbus: Vmbus version:5.3
Sep 13 01:32:44.354976 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 01:32:44.355026 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 01:32:44.355036 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 01:32:44.363103 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 13 01:32:44.363965 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 13 01:32:44.387969 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 01:32:44.388128 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 01:32:44.408795 kernel: scsi host0: storvsc_host_t
Sep 13 01:32:44.409006 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 01:32:44.409032 kernel: scsi host1: storvsc_host_t
Sep 13 01:32:44.409047 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 13 01:32:44.437949 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 01:32:44.450878 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 01:32:44.450894 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 01:32:44.488017 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 01:32:44.488127 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 01:32:44.488208 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 01:32:44.488285 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 01:32:44.488368 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 01:32:44.488443 kernel: sda: sda1 sda2 sda3
sda4 sda6 sda7 sda9 Sep 13 01:32:44.488454 kernel: hv_netvsc 000d3a06-d8a6-000d-3a06-d8a6000d3a06 eth0: VF slot 1 added Sep 13 01:32:44.488536 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 01:32:44.497965 kernel: hv_vmbus: registering driver hv_pci Sep 13 01:32:44.505967 kernel: hv_pci 2190283e-4566-4580-a8d4-522625b157f9: PCI VMBus probing: Using version 0x10004 Sep 13 01:32:44.600257 kernel: hv_pci 2190283e-4566-4580-a8d4-522625b157f9: PCI host bridge to bus 4566:00 Sep 13 01:32:44.600371 kernel: pci_bus 4566:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 13 01:32:44.600462 kernel: pci_bus 4566:00: No busn resource found for root bus, will use [bus 00-ff] Sep 13 01:32:44.600543 kernel: pci 4566:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 13 01:32:44.600645 kernel: pci 4566:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 13 01:32:44.600732 kernel: pci 4566:00:02.0: enabling Extended Tags Sep 13 01:32:44.600815 kernel: pci 4566:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4566:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 13 01:32:44.600891 kernel: pci_bus 4566:00: busn_res: [bus 00-ff] end is updated to 00 Sep 13 01:32:44.600983 kernel: pci 4566:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 13 01:32:44.639326 kernel: mlx5_core 4566:00:02.0: enabling device (0000 -> 0002) Sep 13 01:32:44.875211 kernel: mlx5_core 4566:00:02.0: firmware version: 16.30.1284 Sep 13 01:32:44.875377 kernel: mlx5_core 4566:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Sep 13 01:32:44.875462 kernel: hv_netvsc 000d3a06-d8a6-000d-3a06-d8a6000d3a06 eth0: VF registering: eth1 Sep 13 01:32:44.875544 kernel: mlx5_core 4566:00:02.0 eth1: joined to eth0 Sep 13 01:32:44.884969 kernel: mlx5_core 4566:00:02.0 enP17766s1: renamed from eth1 Sep 13 01:32:45.007969 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (534) Sep 13 
01:32:45.012141 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 01:32:45.029339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 01:32:45.208241 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 01:32:45.215716 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 01:32:45.236630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 01:32:45.246059 systemd[1]: Starting disk-uuid.service... Sep 13 01:32:46.279706 disk-uuid[600]: The operation has completed successfully. Sep 13 01:32:46.285024 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 01:32:46.348885 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 01:32:46.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:46.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:46.348985 systemd[1]: Finished disk-uuid.service. Sep 13 01:32:46.358706 systemd[1]: Starting verity-setup.service... Sep 13 01:32:46.402982 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 01:32:46.744831 systemd[1]: Found device dev-mapper-usr.device. Sep 13 01:32:46.751062 systemd[1]: Mounting sysusr-usr.mount... Sep 13 01:32:46.762183 systemd[1]: Finished verity-setup.service. Sep 13 01:32:46.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:46.826965 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Sep 13 01:32:46.827876 systemd[1]: Mounted sysusr-usr.mount. Sep 13 01:32:46.832188 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 01:32:46.832990 systemd[1]: Starting ignition-setup.service... Sep 13 01:32:46.841458 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 01:32:46.891408 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 01:32:46.891470 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:32:46.896519 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:32:46.939070 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 01:32:46.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:46.948000 audit: BPF prog-id=9 op=LOAD Sep 13 01:32:46.950356 systemd[1]: Starting systemd-networkd.service... Sep 13 01:32:46.977427 systemd-networkd[870]: lo: Link UP Sep 13 01:32:46.977439 systemd-networkd[870]: lo: Gained carrier Sep 13 01:32:46.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:46.977875 systemd-networkd[870]: Enumeration completed Sep 13 01:32:47.024582 kernel: kauditd_printk_skb: 12 callbacks suppressed Sep 13 01:32:47.024610 kernel: audit: type=1130 audit(1757727166.986:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:46.978585 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:32:46.981675 systemd[1]: Started systemd-networkd.service. Sep 13 01:32:46.996078 systemd[1]: Reached target network.target. 
Sep 13 01:32:47.074079 kernel: audit: type=1130 audit(1757727167.043:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:47.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:47.024063 systemd[1]: Starting iscsiuio.service... Sep 13 01:32:47.082780 iscsid[878]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 01:32:47.082780 iscsid[878]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 01:32:47.082780 iscsid[878]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 01:32:47.082780 iscsid[878]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 01:32:47.082780 iscsid[878]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 01:32:47.082780 iscsid[878]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 01:32:47.082780 iscsid[878]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 01:32:47.244197 kernel: audit: type=1130 audit(1757727167.086:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 01:32:47.244223 kernel: mlx5_core 4566:00:02.0 enP17766s1: Link up Sep 13 01:32:47.244393 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 01:32:47.244405 kernel: audit: type=1130 audit(1757727167.192:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:47.244414 kernel: hv_netvsc 000d3a06-d8a6-000d-3a06-d8a6000d3a06 eth0: Data path switched to VF: enP17766s1 Sep 13 01:32:47.244522 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 01:32:47.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:47.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:47.037034 systemd[1]: Started iscsiuio.service. Sep 13 01:32:47.048509 systemd[1]: Starting iscsid.service... Sep 13 01:32:47.079399 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 01:32:47.079756 systemd[1]: Started iscsid.service. Sep 13 01:32:47.094995 systemd[1]: Starting dracut-initqueue.service... Sep 13 01:32:47.179358 systemd[1]: Finished dracut-initqueue.service. Sep 13 01:32:47.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:47.192801 systemd[1]: Reached target remote-fs-pre.target. Sep 13 01:32:47.318731 kernel: audit: type=1130 audit(1757727167.291:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:32:47.241075 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 01:32:47.249869 systemd-networkd[870]: enP17766s1: Link UP Sep 13 01:32:47.250074 systemd-networkd[870]: eth0: Link UP Sep 13 01:32:47.250497 systemd-networkd[870]: eth0: Gained carrier Sep 13 01:32:47.250959 systemd[1]: Reached target remote-fs.target. Sep 13 01:32:47.259870 systemd[1]: Starting dracut-pre-mount.service... Sep 13 01:32:47.267063 systemd-networkd[870]: enP17766s1: Gained carrier Sep 13 01:32:47.280138 systemd[1]: Finished dracut-pre-mount.service. Sep 13 01:32:47.281362 systemd-networkd[870]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 01:32:47.473601 systemd[1]: Finished ignition-setup.service. Sep 13 01:32:47.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:47.500615 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 01:32:47.509818 kernel: audit: type=1130 audit(1757727167.478:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:32:49.014074 systemd-networkd[870]: eth0: Gained IPv6LL Sep 13 01:32:50.481120 ignition[897]: Ignition 2.14.0 Sep 13 01:32:50.481133 ignition[897]: Stage: fetch-offline Sep 13 01:32:50.481188 ignition[897]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:32:50.481211 ignition[897]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:32:50.580715 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:32:50.580861 ignition[897]: parsed url from cmdline: "" Sep 13 01:32:50.580865 ignition[897]: no config URL provided Sep 13 01:32:50.580870 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:32:50.624898 kernel: audit: type=1130 audit(1757727170.596:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:50.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:50.591182 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 01:32:50.580877 ignition[897]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:32:50.598474 systemd[1]: Starting ignition-fetch.service... 
Sep 13 01:32:50.580882 ignition[897]: failed to fetch config: resource requires networking Sep 13 01:32:50.581161 ignition[897]: Ignition finished successfully Sep 13 01:32:50.610635 ignition[903]: Ignition 2.14.0 Sep 13 01:32:50.610641 ignition[903]: Stage: fetch Sep 13 01:32:50.610757 ignition[903]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:32:50.610778 ignition[903]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:32:50.613490 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:32:50.613617 ignition[903]: parsed url from cmdline: "" Sep 13 01:32:50.613620 ignition[903]: no config URL provided Sep 13 01:32:50.613625 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:32:50.613632 ignition[903]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:32:50.613659 ignition[903]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 13 01:32:50.763290 ignition[903]: GET result: OK Sep 13 01:32:50.763368 ignition[903]: config has been read from IMDS userdata Sep 13 01:32:50.766584 unknown[903]: fetched base config from "system" Sep 13 01:32:50.803028 kernel: audit: type=1130 audit(1757727170.777:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:50.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:32:50.763407 ignition[903]: parsing config with SHA512: 557037f6553b910585107b20702433c27ac4ca3f38c68c7d721efddccd77b185729e3006a3753f3bb7e79e7c378ed6b0d99448c756e41f58be540c43cfb5f0f7 Sep 13 01:32:50.766592 unknown[903]: fetched base config from "system" Sep 13 01:32:50.767307 ignition[903]: fetch: fetch complete Sep 13 01:32:50.766597 unknown[903]: fetched user config from "azure" Sep 13 01:32:50.767324 ignition[903]: fetch: fetch passed Sep 13 01:32:50.773334 systemd[1]: Finished ignition-fetch.service. Sep 13 01:32:50.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:50.767384 ignition[903]: Ignition finished successfully Sep 13 01:32:50.857378 kernel: audit: type=1130 audit(1757727170.824:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:50.778917 systemd[1]: Starting ignition-kargs.service... Sep 13 01:32:50.885047 kernel: audit: type=1130 audit(1757727170.866:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:50.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:50.809619 ignition[909]: Ignition 2.14.0 Sep 13 01:32:50.820257 systemd[1]: Finished ignition-kargs.service. Sep 13 01:32:50.809626 ignition[909]: Stage: kargs Sep 13 01:32:50.848399 systemd[1]: Starting ignition-disks.service... Sep 13 01:32:50.809737 ignition[909]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:32:50.862200 systemd[1]: Finished ignition-disks.service. 
Sep 13 01:32:50.809755 ignition[909]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:32:50.867199 systemd[1]: Reached target initrd-root-device.target. Sep 13 01:32:50.812874 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:32:50.890264 systemd[1]: Reached target local-fs-pre.target. Sep 13 01:32:50.815981 ignition[909]: kargs: kargs passed Sep 13 01:32:50.899896 systemd[1]: Reached target local-fs.target. Sep 13 01:32:50.816362 ignition[909]: Ignition finished successfully Sep 13 01:32:50.909194 systemd[1]: Reached target sysinit.target. Sep 13 01:32:50.855702 ignition[915]: Ignition 2.14.0 Sep 13 01:32:50.916932 systemd[1]: Reached target basic.target. Sep 13 01:32:50.855709 ignition[915]: Stage: disks Sep 13 01:32:50.927298 systemd[1]: Starting systemd-fsck-root.service... Sep 13 01:32:50.855832 ignition[915]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:32:50.855851 ignition[915]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:32:51.002680 systemd-fsck[923]: ROOT: clean, 629/7326000 files, 481083/7359488 blocks Sep 13 01:32:50.858859 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:32:50.861074 ignition[915]: disks: disks passed Sep 13 01:32:51.017697 systemd[1]: Finished systemd-fsck-root.service. Sep 13 01:32:51.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:50.861130 ignition[915]: Ignition finished successfully Sep 13 01:32:51.027509 systemd[1]: Mounting sysroot.mount... Sep 13 01:32:51.058961 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. 
Opts: (null). Quota mode: none. Sep 13 01:32:51.059132 systemd[1]: Mounted sysroot.mount. Sep 13 01:32:51.063184 systemd[1]: Reached target initrd-root-fs.target. Sep 13 01:32:51.095913 systemd[1]: Mounting sysroot-usr.mount... Sep 13 01:32:51.100790 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 13 01:32:51.109412 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 01:32:51.109454 systemd[1]: Reached target ignition-diskful.target. Sep 13 01:32:51.121603 systemd[1]: Mounted sysroot-usr.mount. Sep 13 01:32:51.175397 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 01:32:51.181377 systemd[1]: Starting initrd-setup-root.service... Sep 13 01:32:51.211964 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (934) Sep 13 01:32:51.212185 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 01:32:51.232584 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 01:32:51.232710 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:32:51.237843 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:32:51.249503 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 01:32:51.260178 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory Sep 13 01:32:51.282523 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 01:32:51.306194 initrd-setup-root[981]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 01:32:51.848957 systemd[1]: Finished initrd-setup-root.service. Sep 13 01:32:51.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:51.855128 systemd[1]: Starting ignition-mount.service... 
Sep 13 01:32:51.869593 systemd[1]: Starting sysroot-boot.service... Sep 13 01:32:51.879333 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 13 01:32:51.879490 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 13 01:32:51.912795 systemd[1]: Finished sysroot-boot.service. Sep 13 01:32:51.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:51.929767 ignition[1002]: INFO : Ignition 2.14.0 Sep 13 01:32:51.929767 ignition[1002]: INFO : Stage: mount Sep 13 01:32:51.945022 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:32:51.945022 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:32:51.945022 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:32:51.945022 ignition[1002]: INFO : mount: mount passed Sep 13 01:32:51.945022 ignition[1002]: INFO : Ignition finished successfully Sep 13 01:32:51.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:51.942378 systemd[1]: Finished ignition-mount.service. 
Sep 13 01:32:52.834741 coreos-metadata[933]: Sep 13 01:32:52.834 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 13 01:32:52.843428 coreos-metadata[933]: Sep 13 01:32:52.838 INFO Fetch successful Sep 13 01:32:52.876114 coreos-metadata[933]: Sep 13 01:32:52.876 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 13 01:32:52.899289 coreos-metadata[933]: Sep 13 01:32:52.899 INFO Fetch successful Sep 13 01:32:52.915010 coreos-metadata[933]: Sep 13 01:32:52.914 INFO wrote hostname ci-3510.3.8-n-8e33b0f951 to /sysroot/etc/hostname Sep 13 01:32:52.924588 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 13 01:32:52.956624 kernel: kauditd_printk_skb: 4 callbacks suppressed Sep 13 01:32:52.956647 kernel: audit: type=1130 audit(1757727172.930:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:52.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:32:52.931368 systemd[1]: Starting ignition-files.service... Sep 13 01:32:52.966452 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 01:32:52.995962 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1012) Sep 13 01:32:53.009386 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 01:32:53.009422 kernel: BTRFS info (device sda6): using free space tree Sep 13 01:32:53.014224 kernel: BTRFS info (device sda6): has skinny extents Sep 13 01:32:53.022312 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 13 01:32:53.040604 ignition[1031]: INFO : Ignition 2.14.0 Sep 13 01:32:53.040604 ignition[1031]: INFO : Stage: files Sep 13 01:32:53.052151 ignition[1031]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 01:32:53.052151 ignition[1031]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Sep 13 01:32:53.052151 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 01:32:53.052151 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Sep 13 01:32:53.086879 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 01:32:53.086879 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 01:32:53.148887 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 01:32:53.157011 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 01:32:53.172936 unknown[1031]: wrote ssh authorized keys file for user: core Sep 13 01:32:53.178926 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 01:32:53.187398 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 13 01:32:53.187398 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 13 01:32:53.245825 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 01:32:53.328569 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 13 01:32:53.348824 ignition[1031]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:32:53.359280 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 13 01:32:53.529679 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:32:53.605864 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:32:53.714123 ignition[1031]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1152960807"
Sep 13 01:32:53.714123 ignition[1031]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1152960807": device or resource busy
Sep 13 01:32:53.714123 ignition[1031]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1152960807", trying btrfs: device or resource busy
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1152960807"
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1152960807"
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1152960807"
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1152960807"
Sep 13 01:32:53.714123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Sep 13 01:32:53.667667 systemd[1]: mnt-oem1152960807.mount: Deactivated successfully.
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2287060886"
Sep 13 01:32:53.897329 ignition[1031]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2287060886": device or resource busy
Sep 13 01:32:53.897329 ignition[1031]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2287060886", trying btrfs: device or resource busy
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2287060886"
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2287060886"
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2287060886"
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2287060886"
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 13 01:32:53.897329 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 13 01:32:54.195710 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Sep 13 01:32:54.443016 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 13 01:32:54.443016 ignition[1031]: INFO : files: op(14): [started] processing unit "waagent.service"
Sep 13 01:32:54.443016 ignition[1031]: INFO : files: op(14): [finished] processing unit "waagent.service"
Sep 13 01:32:54.443016 ignition[1031]: INFO : files: op(15): [started] processing unit "nvidia.service"
Sep 13 01:32:54.443016 ignition[1031]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Sep 13 01:32:54.443016 ignition[1031]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Sep 13 01:32:54.532779 kernel: audit: type=1130 audit(1757727174.472:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.465245 systemd[1]: Finished ignition-files.service.
Sep 13 01:32:54.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:32:54.559457 ignition[1031]: INFO : files: files passed
Sep 13 01:32:54.559457 ignition[1031]: INFO : Ignition finished successfully
Sep 13 01:32:54.811800 kernel: audit: type=1130 audit(1757727174.537:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.811826 kernel: audit: type=1131 audit(1757727174.558:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.811837 kernel: audit: type=1130 audit(1757727174.603:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.811847 kernel: audit: type=1130 audit(1757727174.672:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.811856 kernel: audit: type=1131 audit(1757727174.672:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.811871 kernel: audit: type=1130 audit(1757727174.782:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.499238 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Sep 13 01:32:54.817872 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 01:32:54.505779 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Sep 13 01:32:54.511886 systemd[1]: Starting ignition-quench.service...
Sep 13 01:32:54.524921 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 01:32:54.525056 systemd[1]: Finished ignition-quench.service.
Sep 13 01:32:54.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.574271 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Sep 13 01:32:54.603585 systemd[1]: Reached target ignition-complete.target.
Sep 13 01:32:54.638136 systemd[1]: Starting initrd-parse-etc.service...
Sep 13 01:32:54.660427 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 01:32:54.931919 kernel: audit: type=1131 audit(1757727174.872:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.660528 systemd[1]: Finished initrd-parse-etc.service.
Sep 13 01:32:54.673465 systemd[1]: Reached target initrd-fs.target.
Sep 13 01:32:54.721052 systemd[1]: Reached target initrd.target.
Sep 13 01:32:54.733986 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Sep 13 01:32:54.742153 systemd[1]: Starting dracut-pre-pivot.service...
Sep 13 01:32:54.767660 systemd[1]: Finished dracut-pre-pivot.service.
Sep 13 01:32:54.811292 systemd[1]: Starting initrd-cleanup.service...
Sep 13 01:32:54.830186 systemd[1]: Stopped target nss-lookup.target.
Sep 13 01:32:54.836496 systemd[1]: Stopped target remote-cryptsetup.target.
Sep 13 01:32:55.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.853099 systemd[1]: Stopped target timers.target.
Sep 13 01:32:54.862211 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 01:32:55.060208 kernel: audit: type=1131 audit(1757727175.018:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.862272 systemd[1]: Stopped dracut-pre-pivot.service.
Sep 13 01:32:55.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.895452 systemd[1]: Stopped target initrd.target.
Sep 13 01:32:55.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.905924 systemd[1]: Stopped target basic.target.
Sep 13 01:32:55.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.915395 systemd[1]: Stopped target ignition-complete.target.
Sep 13 01:32:55.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.926022 systemd[1]: Stopped target ignition-diskful.target.
Sep 13 01:32:54.937828 systemd[1]: Stopped target initrd-root-device.target.
Sep 13 01:32:54.947651 systemd[1]: Stopped target remote-fs.target.
Sep 13 01:32:55.121082 ignition[1069]: INFO : Ignition 2.14.0
Sep 13 01:32:55.121082 ignition[1069]: INFO : Stage: umount
Sep 13 01:32:55.121082 ignition[1069]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Sep 13 01:32:55.121082 ignition[1069]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Sep 13 01:32:55.121082 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 01:32:55.121082 ignition[1069]: INFO : umount: umount passed
Sep 13 01:32:55.121082 ignition[1069]: INFO : Ignition finished successfully
Sep 13 01:32:55.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.957437 systemd[1]: Stopped target remote-fs-pre.target.
Sep 13 01:32:55.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.222701 iscsid[878]: iscsid shutting down.
Sep 13 01:32:55.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.969291 systemd[1]: Stopped target sysinit.target.
Sep 13 01:32:55.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.982017 systemd[1]: Stopped target local-fs.target.
Sep 13 01:32:55.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:54.991520 systemd[1]: Stopped target local-fs-pre.target.
Sep 13 01:32:55.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.001198 systemd[1]: Stopped target swap.target.
Sep 13 01:32:55.009331 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 01:32:55.009401 systemd[1]: Stopped dracut-pre-mount.service.
Sep 13 01:32:55.043749 systemd[1]: Stopped target cryptsetup.target.
Sep 13 01:32:55.054357 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 01:32:55.054420 systemd[1]: Stopped dracut-initqueue.service.
Sep 13 01:32:55.065166 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 01:32:55.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.065211 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Sep 13 01:32:55.074845 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 01:32:55.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.074883 systemd[1]: Stopped ignition-files.service.
Sep 13 01:32:55.083519 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 01:32:55.083557 systemd[1]: Stopped flatcar-metadata-hostname.service.
Sep 13 01:32:55.097871 systemd[1]: Stopping ignition-mount.service...
Sep 13 01:32:55.115912 systemd[1]: Stopping iscsid.service...
Sep 13 01:32:55.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.125003 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 01:32:55.125084 systemd[1]: Stopped kmod-static-nodes.service.
Sep 13 01:32:55.134991 systemd[1]: Stopping sysroot-boot.service...
Sep 13 01:32:55.153432 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 01:32:55.153510 systemd[1]: Stopped systemd-udev-trigger.service.
Sep 13 01:32:55.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.158983 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 01:32:55.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.159032 systemd[1]: Stopped dracut-pre-trigger.service.
Sep 13 01:32:55.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.180547 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 01:32:55.180649 systemd[1]: Stopped iscsid.service.
Sep 13 01:32:55.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.192322 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 01:32:55.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.481000 audit: BPF prog-id=6 op=UNLOAD
Sep 13 01:32:55.192740 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 01:32:55.192827 systemd[1]: Finished initrd-cleanup.service.
Sep 13 01:32:55.210339 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 01:32:55.210423 systemd[1]: Stopped ignition-mount.service.
Sep 13 01:32:55.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.215472 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 01:32:55.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.215523 systemd[1]: Stopped ignition-disks.service.
Sep 13 01:32:55.556766 kernel: hv_netvsc 000d3a06-d8a6-000d-3a06-d8a6000d3a06 eth0: Data path switched from VF: enP17766s1
Sep 13 01:32:55.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.228212 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 01:32:55.228271 systemd[1]: Stopped ignition-kargs.service.
Sep 13 01:32:55.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.236699 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 01:32:55.236745 systemd[1]: Stopped ignition-fetch.service.
Sep 13 01:32:55.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.245357 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 01:32:55.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.245409 systemd[1]: Stopped ignition-fetch-offline.service.
Sep 13 01:32:55.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.255443 systemd[1]: Stopped target paths.target.
Sep 13 01:32:55.264387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 01:32:55.267973 systemd[1]: Stopped systemd-ask-password-console.path.
Sep 13 01:32:55.274763 systemd[1]: Stopped target slices.target.
Sep 13 01:32:55.284866 systemd[1]: Stopped target sockets.target.
Sep 13 01:32:55.293657 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 01:32:55.293707 systemd[1]: Closed iscsid.socket.
Sep 13 01:32:55.307520 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 01:32:55.307572 systemd[1]: Stopped ignition-setup.service.
Sep 13 01:32:55.318096 systemd[1]: Stopping iscsiuio.service...
Sep 13 01:32:55.329208 systemd[1]: iscsiuio.service: Deactivated successfully.
Sep 13 01:32:55.329308 systemd[1]: Stopped iscsiuio.service.
Sep 13 01:32:55.339186 systemd[1]: Stopped target network.target.
Sep 13 01:32:55.349245 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 01:32:55.349280 systemd[1]: Closed iscsiuio.socket.
Sep 13 01:32:55.357678 systemd[1]: Stopping systemd-networkd.service...
Sep 13 01:32:55.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:32:55.368252 systemd[1]: Stopping systemd-resolved.service...
Sep 13 01:32:55.372810 systemd-networkd[870]: eth0: DHCPv6 lease lost
Sep 13 01:32:55.708000 audit: BPF prog-id=9 op=UNLOAD
Sep 13 01:32:55.378377 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 01:32:55.378488 systemd[1]: Stopped systemd-networkd.service.
Sep 13 01:32:55.387313 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 01:32:55.387350 systemd[1]: Closed systemd-networkd.socket.
Sep 13 01:32:55.405460 systemd[1]: Stopping network-cleanup.service...
Sep 13 01:32:55.750589 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Sep 13 01:32:55.415184 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 01:32:55.415264 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 13 01:32:55.426637 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 01:32:55.426685 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 01:32:55.441265 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 01:32:55.441323 systemd[1]: Stopped systemd-modules-load.service.
Sep 13 01:32:55.447045 systemd[1]: Stopping systemd-udevd.service...
Sep 13 01:32:55.458777 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 01:32:55.459285 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 01:32:55.459405 systemd[1]: Stopped systemd-resolved.service.
Sep 13 01:32:55.468389 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 01:32:55.468521 systemd[1]: Stopped systemd-udevd.service.
Sep 13 01:32:55.482935 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 01:32:55.483013 systemd[1]: Closed systemd-udevd-control.socket.
Sep 13 01:32:55.496783 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 01:32:55.496823 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 13 01:32:55.506993 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 01:32:55.507047 systemd[1]: Stopped dracut-pre-udev.service.
Sep 13 01:32:55.518227 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 01:32:55.518275 systemd[1]: Stopped dracut-cmdline.service.
Sep 13 01:32:55.528342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 01:32:55.528389 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 13 01:32:55.556146 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 13 01:32:55.568754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 01:32:55.568832 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 13 01:32:55.578453 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 01:32:55.578562 systemd[1]: Stopped sysroot-boot.service.
Sep 13 01:32:55.591266 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 01:32:55.591370 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 13 01:32:55.601710 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 01:32:55.601769 systemd[1]: Stopped initrd-setup-root.service.
Sep 13 01:32:55.685928 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 01:32:55.686072 systemd[1]: Stopped network-cleanup.service.
Sep 13 01:32:55.694475 systemd[1]: Reached target initrd-switch-root.target.
Sep 13 01:32:55.705114 systemd[1]: Starting initrd-switch-root.service...
Sep 13 01:32:55.721237 systemd[1]: Switching root.
Sep 13 01:32:55.751518 systemd-journald[276]: Journal stopped
Sep 13 01:33:09.628918 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 13 01:33:09.628937 kernel: SELinux: Class anon_inode not defined in policy.
Sep 13 01:33:09.628957 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 13 01:33:09.628967 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 01:33:09.628975 kernel: SELinux: policy capability open_perms=1
Sep 13 01:33:09.628983 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 01:33:09.628992 kernel: SELinux: policy capability always_check_network=0
Sep 13 01:33:09.628999 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 01:33:09.629007 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 01:33:09.629015 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 01:33:09.629023 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 01:33:09.629032 kernel: kauditd_printk_skb: 34 callbacks suppressed
Sep 13 01:33:09.629040 kernel: audit: type=1403 audit(1757727177.997:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 01:33:09.629051 systemd[1]: Successfully loaded SELinux policy in 294.028ms.
Sep 13 01:33:09.629062 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.487ms.
Sep 13 01:33:09.629073 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 01:33:09.629084 systemd[1]: Detected virtualization microsoft.
Sep 13 01:33:09.629093 systemd[1]: Detected architecture arm64.
Sep 13 01:33:09.629102 systemd[1]: Detected first boot.
Sep 13 01:33:09.629111 systemd[1]: Hostname set to .
Sep 13 01:33:09.629120 systemd[1]: Initializing machine ID from random generator.
Sep 13 01:33:09.629129 kernel: audit: type=1400 audit(1757727178.848:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:33:09.629140 kernel: audit: type=1400 audit(1757727178.848:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:33:09.629149 kernel: audit: type=1334 audit(1757727178.866:84): prog-id=10 op=LOAD
Sep 13 01:33:09.629157 kernel: audit: type=1334 audit(1757727178.866:85): prog-id=10 op=UNLOAD
Sep 13 01:33:09.629165 kernel: audit: type=1334 audit(1757727178.883:86): prog-id=11 op=LOAD
Sep 13 01:33:09.629173 kernel: audit: type=1334 audit(1757727178.883:87): prog-id=11 op=UNLOAD
Sep 13 01:33:09.629181 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 01:33:09.629191 kernel: audit: type=1400 audit(1757727180.245:88): avc: denied { associate } for pid=1104 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 01:33:09.629201 kernel: audit: type=1300 audit(1757727180.245:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022804 a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:09.629211 kernel: audit: type=1327 audit(1757727180.245:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 01:33:09.629220 systemd[1]: Populated /etc with preset unit settings.
Sep 13 01:33:09.629229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:33:09.629238 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:33:09.629248 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:33:09.629258 kernel: kauditd_printk_skb: 6 callbacks suppressed
Sep 13 01:33:09.629267 kernel: audit: type=1334 audit(1757727188.850:90): prog-id=12 op=LOAD
Sep 13 01:33:09.629277 kernel: audit: type=1334 audit(1757727188.850:91): prog-id=3 op=UNLOAD
Sep 13 01:33:09.629285 kernel: audit: type=1334 audit(1757727188.850:92): prog-id=13 op=LOAD
Sep 13 01:33:09.629293 kernel: audit: type=1334 audit(1757727188.850:93): prog-id=14 op=LOAD
Sep 13 01:33:09.629304 kernel: audit: type=1334 audit(1757727188.850:94): prog-id=4 op=UNLOAD
Sep 13 01:33:09.629312 kernel: audit: type=1334 audit(1757727188.850:95): prog-id=5 op=UNLOAD
Sep 13 01:33:09.629321 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 01:33:09.629330 kernel: audit: type=1334 audit(1757727188.856:96): prog-id=15 op=LOAD
Sep 13 01:33:09.629340 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 01:33:09.629349 kernel: audit: type=1334 audit(1757727188.856:97): prog-id=12 op=UNLOAD
Sep 13 01:33:09.629357 kernel: audit: type=1334 audit(1757727188.862:98): prog-id=16 op=LOAD
Sep 13 01:33:09.629366 kernel: audit: type=1334 audit(1757727188.868:99): prog-id=17 op=LOAD
Sep 13 01:33:09.629375 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 01:33:09.629384 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 01:33:09.629394 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 01:33:09.629404 systemd[1]: Created slice system-getty.slice.
Sep 13 01:33:09.629413 systemd[1]: Created slice system-modprobe.slice.
Sep 13 01:33:09.629422 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 01:33:09.629432 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 01:33:09.629441 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 01:33:09.629450 systemd[1]: Created slice user.slice.
Sep 13 01:33:09.629459 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 01:33:09.629469 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 01:33:09.629478 systemd[1]: Set up automount boot.automount.
Sep 13 01:33:09.629489 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 01:33:09.629498 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 01:33:09.629507 systemd[1]: Stopped target initrd-fs.target.
Sep 13 01:33:09.629516 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 01:33:09.629525 systemd[1]: Reached target integritysetup.target.
Sep 13 01:33:09.629535 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 01:33:09.629544 systemd[1]: Reached target remote-fs.target.
Sep 13 01:33:09.629553 systemd[1]: Reached target slices.target.
Sep 13 01:33:09.629563 systemd[1]: Reached target swap.target.
Sep 13 01:33:09.629572 systemd[1]: Reached target torcx.target.
Sep 13 01:33:09.629581 systemd[1]: Reached target veritysetup.target.
Sep 13 01:33:09.629591 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 01:33:09.629600 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 01:33:09.629609 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 01:33:09.629619 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 01:33:09.629629 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 01:33:09.629638 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 01:33:09.629647 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 01:33:09.629657 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 01:33:09.629667 systemd[1]: Mounting media.mount...
Sep 13 01:33:09.629676 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 01:33:09.629686 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 01:33:09.629696 systemd[1]: Mounting tmp.mount...
Sep 13 01:33:09.629705 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 01:33:09.629714 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 01:33:09.629724 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 01:33:09.629733 systemd[1]: Starting modprobe@configfs.service...
Sep 13 01:33:09.629742 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 01:33:09.629751 systemd[1]: Starting modprobe@drm.service...
Sep 13 01:33:09.629760 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 01:33:09.629769 systemd[1]: Starting modprobe@fuse.service...
Sep 13 01:33:09.629780 systemd[1]: Starting modprobe@loop.service...
Sep 13 01:33:09.629790 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 01:33:09.629799 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 01:33:09.629809 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 01:33:09.629818 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 01:33:09.629827 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 01:33:09.629836 systemd[1]: Stopped systemd-journald.service.
Sep 13 01:33:09.629845 systemd[1]: systemd-journald.service: Consumed 3.297s CPU time.
Sep 13 01:33:09.629856 systemd[1]: Starting systemd-journald.service...
Sep 13 01:33:09.629866 kernel: loop: module loaded
Sep 13 01:33:09.629874 kernel: fuse: init (API version 7.34)
Sep 13 01:33:09.629883 systemd[1]: Starting systemd-modules-load.service...
Sep 13 01:33:09.629892 systemd[1]: Starting systemd-network-generator.service...
Sep 13 01:33:09.629902 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 01:33:09.629911 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 01:33:09.629920 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 01:33:09.629929 systemd[1]: Stopped verity-setup.service.
Sep 13 01:33:09.629958 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 01:33:09.629968 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 01:33:09.629978 systemd[1]: Mounted media.mount.
Sep 13 01:33:09.629987 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 01:33:09.629997 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 01:33:09.630010 systemd-journald[1210]: Journal started
Sep 13 01:33:09.630047 systemd-journald[1210]: Runtime Journal (/run/log/journal/a8407bc7aa3d49d49586ad8adfb15f9a) is 8.0M, max 78.5M, 70.5M free.
Sep 13 01:32:57.997000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 01:32:58.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:32:58.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 01:32:58.866000 audit: BPF prog-id=10 op=LOAD
Sep 13 01:32:58.866000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 01:32:58.883000 audit: BPF prog-id=11 op=LOAD
Sep 13 01:32:58.883000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 01:33:00.245000 audit[1104]: AVC avc: denied { associate } for pid=1104 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 01:33:00.245000 audit[1104]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022804 a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:00.245000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 01:33:00.255000 audit[1104]: AVC avc: denied { associate } for pid=1104 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 01:33:00.255000 audit[1104]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228d9 a2=1ed a3=0 items=2 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:00.255000 audit: CWD cwd="/"
Sep 13 01:33:00.255000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:00.255000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:00.255000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 01:33:08.850000 audit: BPF prog-id=12 op=LOAD
Sep 13 01:33:08.850000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 01:33:08.850000 audit: BPF prog-id=13 op=LOAD
Sep 13 01:33:08.850000 audit: BPF prog-id=14 op=LOAD
Sep 13 01:33:08.850000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 01:33:08.850000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 01:33:08.856000 audit: BPF prog-id=15 op=LOAD
Sep 13 01:33:08.856000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 01:33:08.862000 audit: BPF prog-id=16 op=LOAD
Sep 13 01:33:08.868000 audit: BPF prog-id=17 op=LOAD
Sep 13 01:33:08.868000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 01:33:08.868000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 01:33:08.874000 audit: BPF prog-id=18 op=LOAD
Sep 13 01:33:08.874000 audit: BPF prog-id=15 op=UNLOAD
Sep 13 01:33:08.880000 audit: BPF prog-id=19 op=LOAD
Sep 13 01:33:08.885000 audit: BPF prog-id=20 op=LOAD
Sep 13 01:33:08.885000 audit: BPF prog-id=16 op=UNLOAD
Sep 13 01:33:08.885000 audit: BPF prog-id=17 op=UNLOAD
Sep 13 01:33:08.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:08.913000 audit: BPF prog-id=18 op=UNLOAD
Sep 13 01:33:08.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:08.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.484000 audit: BPF prog-id=21 op=LOAD
Sep 13 01:33:09.484000 audit: BPF prog-id=22 op=LOAD
Sep 13 01:33:09.484000 audit: BPF prog-id=23 op=LOAD
Sep 13 01:33:09.484000 audit: BPF prog-id=19 op=UNLOAD
Sep 13 01:33:09.484000 audit: BPF prog-id=20 op=UNLOAD
Sep 13 01:33:09.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:08.849729 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 01:33:00.149488 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:33:08.849742 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Sep 13 01:33:00.183977 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 01:33:08.887133 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 01:33:09.626000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 01:33:09.626000 audit[1210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc6a471c0 a2=4000 a3=1 items=0 ppid=1 pid=1210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:09.626000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 01:33:00.184003 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 01:33:08.887514 systemd[1]: systemd-journald.service: Consumed 3.297s CPU time.
Sep 13 01:33:00.184044 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 01:33:00.184056 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 01:33:00.184102 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 01:33:00.184116 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 01:33:00.184323 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 01:33:00.184357 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 01:33:00.184368 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 01:33:00.234562 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 01:33:00.234609 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 01:33:00.234633 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 01:33:00.234647 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 01:33:00.234672 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 01:33:00.234685 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 01:33:05.485177 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:33:05.485442 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:33:05.485543 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:33:05.485705 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 01:33:05.485755 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 01:33:05.485810 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-09-13T01:33:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 01:33:09.640811 systemd[1]: Started systemd-journald.service.
Sep 13 01:33:09.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.641582 systemd[1]: Mounted tmp.mount.
Sep 13 01:33:09.645473 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 01:33:09.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.650394 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 01:33:09.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.655638 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 01:33:09.655768 systemd[1]: Finished modprobe@configfs.service.
Sep 13 01:33:09.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.661055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 01:33:09.661183 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 01:33:09.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.666005 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 01:33:09.666126 systemd[1]: Finished modprobe@drm.service.
Sep 13 01:33:09.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.671425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 01:33:09.671549 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 01:33:09.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.677843 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 01:33:09.678157 systemd[1]: Finished modprobe@fuse.service.
Sep 13 01:33:09.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.684130 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 01:33:09.684253 systemd[1]: Finished modprobe@loop.service.
Sep 13 01:33:09.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.689529 systemd[1]: Finished systemd-network-generator.service.
Sep 13 01:33:09.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.695140 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 01:33:09.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.700440 systemd[1]: Reached target network-pre.target.
Sep 13 01:33:09.706560 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 01:33:09.712111 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 01:33:09.716355 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 01:33:09.742248 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 01:33:09.747895 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 01:33:09.752803 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 01:33:09.754083 systemd[1]: Starting systemd-random-seed.service...
Sep 13 01:33:09.758665 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 01:33:09.759918 systemd[1]: Starting systemd-sysusers.service...
Sep 13 01:33:09.766236 systemd[1]: Finished systemd-modules-load.service.
Sep 13 01:33:09.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.772303 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 01:33:09.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.778561 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 01:33:09.784563 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 01:33:09.790474 systemd[1]: Starting systemd-sysctl.service...
Sep 13 01:33:09.796957 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 01:33:09.814160 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 01:33:09.824436 systemd[1]: Finished systemd-random-seed.service.
Sep 13 01:33:09.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.830031 systemd[1]: Reached target first-boot-complete.target.
Sep 13 01:33:09.830516 systemd-journald[1210]: Time spent on flushing to /var/log/journal/a8407bc7aa3d49d49586ad8adfb15f9a is 14.918ms for 1113 entries.
Sep 13 01:33:09.830516 systemd-journald[1210]: System Journal (/var/log/journal/a8407bc7aa3d49d49586ad8adfb15f9a) is 8.0M, max 2.6G, 2.6G free.
Sep 13 01:33:09.918513 systemd-journald[1210]: Received client request to flush runtime journal.
Sep 13 01:33:09.919533 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 01:33:09.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:09.969903 systemd[1]: Finished systemd-sysctl.service.
Sep 13 01:33:09.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:10.585764 systemd[1]: Finished systemd-sysusers.service.
Sep 13 01:33:10.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:11.230997 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 01:33:11.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:11.235000 audit: BPF prog-id=24 op=LOAD
Sep 13 01:33:11.235000 audit: BPF prog-id=25 op=LOAD
Sep 13 01:33:11.235000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 01:33:11.235000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 01:33:11.237563 systemd[1]: Starting systemd-udevd.service...
Sep 13 01:33:11.255630 systemd-udevd[1227]: Using default interface naming scheme 'v252'.
Sep 13 01:33:12.190066 systemd[1]: Started systemd-udevd.service.
Sep 13 01:33:12.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:12.194000 audit: BPF prog-id=26 op=LOAD
Sep 13 01:33:12.198005 systemd[1]: Starting systemd-networkd.service...
Sep 13 01:33:12.233162 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Sep 13 01:33:12.295000 audit: BPF prog-id=27 op=LOAD
Sep 13 01:33:12.295000 audit: BPF prog-id=28 op=LOAD
Sep 13 01:33:12.295000 audit: BPF prog-id=29 op=LOAD
Sep 13 01:33:12.297448 systemd[1]: Starting systemd-userdbd.service...
Sep 13 01:33:12.302000 audit[1248]: AVC avc: denied { confidentiality } for pid=1248 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 01:33:12.314000 kernel: hv_vmbus: registering driver hv_balloon
Sep 13 01:33:12.314089 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 13 01:33:12.325073 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 13 01:33:12.302000 audit[1248]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad162a6b0 a1=aa2c a2=ffff84df24b0 a3=aaaad1580010 items=12 ppid=1227 pid=1248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 01:33:12.302000 audit: CWD cwd="/"
Sep 13 01:33:12.302000 audit: PATH item=0 name=(null) inode=6743 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=1 name=(null) inode=9711 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=2 name=(null) inode=9711 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=3 name=(null) inode=9712 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=4 name=(null) inode=9711 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=5 name=(null) inode=9713 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=6 name=(null) inode=9711 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=7 name=(null) inode=9714 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=8 name=(null) inode=9711 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=9 name=(null) inode=9715 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=10 name=(null) inode=9711 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PATH item=11 name=(null) inode=9716 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 01:33:12.302000 audit: PROCTITLE proctitle="(udev-worker)"
Sep 13 01:33:12.335968 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 01:33:12.343984 kernel: hv_vmbus: registering driver hyperv_fb
Sep 13 01:33:12.357458 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 13 01:33:12.357617 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 13 01:33:12.359049 kernel: Console: switching to colour dummy device 80x25
Sep 13 01:33:12.366964 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 01:33:12.383690 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 01:33:12.383771 kernel: hv_vmbus: registering driver hv_utils
Sep 13 01:33:12.384981 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 01:33:12.385032 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 01:33:12.385059 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 01:33:12.898676 systemd[1]: Started systemd-userdbd.service.
Sep 13 01:33:12.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:13.201472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 01:33:13.211537 systemd[1]: Finished systemd-udev-settle.service.
Sep 13 01:33:13.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:13.217754 systemd[1]: Starting lvm2-activation-early.service...
Sep 13 01:33:13.506288 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:33:13.525236 systemd-networkd[1243]: lo: Link UP
Sep 13 01:33:13.525245 systemd-networkd[1243]: lo: Gained carrier
Sep 13 01:33:13.525669 systemd-networkd[1243]: Enumeration completed
Sep 13 01:33:13.525862 systemd[1]: Started systemd-networkd.service.
Sep 13 01:33:13.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:13.532107 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 01:33:13.552361 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 01:33:13.576040 systemd[1]: Finished lvm2-activation-early.service.
Sep 13 01:33:13.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:13.581755 systemd[1]: Reached target cryptsetup.target.
Sep 13 01:33:13.588612 systemd[1]: Starting lvm2-activation.service...
Sep 13 01:33:13.593246 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 01:33:13.608230 kernel: mlx5_core 4566:00:02.0 enP17766s1: Link up
Sep 13 01:33:13.608499 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 01:33:13.618070 systemd[1]: Finished lvm2-activation.service.
Sep 13 01:33:13.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 01:33:13.623059 systemd[1]: Reached target local-fs-pre.target.
Sep 13 01:33:13.627940 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 01:33:13.627971 systemd[1]: Reached target local-fs.target. Sep 13 01:33:13.641716 systemd[1]: Reached target machines.target. Sep 13 01:33:13.642306 kernel: hv_netvsc 000d3a06-d8a6-000d-3a06-d8a6000d3a06 eth0: Data path switched to VF: enP17766s1 Sep 13 01:33:13.643636 systemd-networkd[1243]: enP17766s1: Link UP Sep 13 01:33:13.644054 systemd-networkd[1243]: eth0: Link UP Sep 13 01:33:13.644161 systemd-networkd[1243]: eth0: Gained carrier Sep 13 01:33:13.647735 systemd[1]: Starting ldconfig.service... Sep 13 01:33:13.649038 systemd-networkd[1243]: enP17766s1: Gained carrier Sep 13 01:33:13.657228 systemd-networkd[1243]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 01:33:13.676195 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:33:13.676296 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:13.677554 systemd[1]: Starting systemd-boot-update.service... Sep 13 01:33:13.683323 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 01:33:13.690594 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 01:33:13.697060 systemd[1]: Starting systemd-sysext.service... Sep 13 01:33:13.728296 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1308 (bootctl) Sep 13 01:33:13.729643 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 01:33:14.056591 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 01:33:14.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:14.065286 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 01:33:14.116913 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 01:33:14.117177 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 01:33:14.138250 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 01:33:14.138947 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 01:33:14.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.189132 kernel: loop0: detected capacity change from 0 to 207008 Sep 13 01:33:14.248138 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 01:33:14.274127 kernel: loop1: detected capacity change from 0 to 207008 Sep 13 01:33:14.284790 systemd-fsck[1316]: fsck.fat 4.2 (2021-01-31) Sep 13 01:33:14.284790 systemd-fsck[1316]: /dev/sda1: 236 files, 117310/258078 clusters Sep 13 01:33:14.288172 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 01:33:14.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.296330 systemd[1]: Mounting boot.mount... Sep 13 01:33:14.300933 (sd-sysext)[1320]: Using extensions 'kubernetes'. Sep 13 01:33:14.301329 (sd-sysext)[1320]: Merged extensions into '/usr'. Sep 13 01:33:14.323462 systemd[1]: Mounted boot.mount. Sep 13 01:33:14.329010 systemd[1]: Mounting usr-share-oem.mount... Sep 13 01:33:14.335162 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:14.336625 systemd[1]: Starting modprobe@dm_mod.service... 
Sep 13 01:33:14.342305 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:14.348115 systemd[1]: Starting modprobe@loop.service... Sep 13 01:33:14.352404 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:33:14.352554 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:14.355355 systemd[1]: Finished systemd-boot-update.service. Sep 13 01:33:14.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.360924 systemd[1]: Mounted usr-share-oem.mount. Sep 13 01:33:14.365788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:14.365922 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:14.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.373505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:14.373631 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:33:14.374965 kernel: kauditd_printk_skb: 83 callbacks suppressed Sep 13 01:33:14.375026 kernel: audit: type=1130 audit(1757727194.369:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:14.416777 kernel: audit: type=1131 audit(1757727194.372:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.417952 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:33:14.418310 systemd[1]: Finished modprobe@loop.service. Sep 13 01:33:14.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.455805 kernel: audit: type=1130 audit(1757727194.416:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.455887 kernel: audit: type=1131 audit(1757727194.416:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.458141 systemd[1]: Finished systemd-sysext.service. Sep 13 01:33:14.473233 kernel: audit: type=1130 audit(1757727194.455:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 01:33:14.473314 kernel: audit: type=1131 audit(1757727194.455:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.500437 systemd[1]: Starting ensure-sysext.service... Sep 13 01:33:14.519695 kernel: audit: type=1130 audit(1757727194.495:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.520278 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:33:14.520355 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 01:33:14.521618 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 01:33:14.528539 systemd[1]: Reloading. 
Sep 13 01:33:14.594575 /usr/lib/systemd/system-generators/torcx-generator[1351]: time="2025-09-13T01:33:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:33:14.598323 /usr/lib/systemd/system-generators/torcx-generator[1351]: time="2025-09-13T01:33:14Z" level=info msg="torcx already run" Sep 13 01:33:14.670351 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:33:14.670540 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:33:14.686058 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 13 01:33:14.751000 audit: BPF prog-id=30 op=LOAD Sep 13 01:33:14.757000 audit: BPF prog-id=31 op=LOAD Sep 13 01:33:14.764917 kernel: audit: type=1334 audit(1757727194.751:173): prog-id=30 op=LOAD Sep 13 01:33:14.765023 kernel: audit: type=1334 audit(1757727194.757:174): prog-id=31 op=LOAD Sep 13 01:33:14.757000 audit: BPF prog-id=24 op=UNLOAD Sep 13 01:33:14.757000 audit: BPF prog-id=25 op=UNLOAD Sep 13 01:33:14.764000 audit: BPF prog-id=32 op=LOAD Sep 13 01:33:14.764000 audit: BPF prog-id=21 op=UNLOAD Sep 13 01:33:14.764000 audit: BPF prog-id=33 op=LOAD Sep 13 01:33:14.764000 audit: BPF prog-id=34 op=LOAD Sep 13 01:33:14.764000 audit: BPF prog-id=22 op=UNLOAD Sep 13 01:33:14.764000 audit: BPF prog-id=23 op=UNLOAD Sep 13 01:33:14.770000 audit: BPF prog-id=35 op=LOAD Sep 13 01:33:14.770000 audit: BPF prog-id=27 op=UNLOAD Sep 13 01:33:14.770000 audit: BPF prog-id=36 op=LOAD Sep 13 01:33:14.770000 audit: BPF prog-id=37 op=LOAD Sep 13 01:33:14.770000 audit: BPF prog-id=28 op=UNLOAD Sep 13 01:33:14.770000 audit: BPF prog-id=29 op=UNLOAD Sep 13 01:33:14.770000 audit: BPF prog-id=38 op=LOAD Sep 13 01:33:14.770000 audit: BPF prog-id=26 op=UNLOAD Sep 13 01:33:14.772116 kernel: audit: type=1334 audit(1757727194.757:175): prog-id=24 op=UNLOAD Sep 13 01:33:14.787813 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:14.789737 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:14.795505 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:14.800682 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 01:33:14.802511 systemd[1]: Starting modprobe@loop.service... Sep 13 01:33:14.806543 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 01:33:14.806687 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:14.807544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:14.807692 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:14.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.812860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:14.812993 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:33:14.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.818560 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:33:14.818683 systemd[1]: Finished modprobe@loop.service. Sep 13 01:33:14.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:14.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.824944 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:14.826570 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:14.832516 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:14.838253 systemd[1]: Starting modprobe@loop.service... Sep 13 01:33:14.842420 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:33:14.842565 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:14.843424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:14.843573 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:14.848889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:14.849017 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:33:14.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:14.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.854879 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:33:14.855213 systemd[1]: Finished modprobe@loop.service. Sep 13 01:33:14.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.865903 systemd[1]: Finished ensure-sysext.service. Sep 13 01:33:14.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.871921 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 01:33:14.873590 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 01:33:14.879388 systemd[1]: Starting modprobe@drm.service... Sep 13 01:33:14.885387 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 01:33:14.891328 systemd[1]: Starting modprobe@loop.service... Sep 13 01:33:14.896666 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 01:33:14.896864 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 13 01:33:14.897515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:33:14.897764 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 01:33:14.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.903250 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 01:33:14.903484 systemd[1]: Finished modprobe@drm.service. Sep 13 01:33:14.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.909196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:33:14.909426 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 01:33:14.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.915359 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 13 01:33:14.915588 systemd[1]: Finished modprobe@loop.service. Sep 13 01:33:14.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:14.920887 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:33:14.921010 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 01:33:15.022248 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 01:33:15.230854 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 01:33:15.378273 systemd-networkd[1243]: eth0: Gained IPv6LL Sep 13 01:33:15.384058 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 01:33:15.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:17.861945 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 01:33:17.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:17.869305 systemd[1]: Starting audit-rules.service... Sep 13 01:33:17.875308 systemd[1]: Starting clean-ca-certificates.service... 
Sep 13 01:33:17.881268 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 01:33:17.886000 audit: BPF prog-id=39 op=LOAD Sep 13 01:33:17.890404 systemd[1]: Starting systemd-resolved.service... Sep 13 01:33:17.894000 audit: BPF prog-id=40 op=LOAD Sep 13 01:33:17.897046 systemd[1]: Starting systemd-timesyncd.service... Sep 13 01:33:17.903013 systemd[1]: Starting systemd-update-utmp.service... Sep 13 01:33:17.947000 audit[1427]: SYSTEM_BOOT pid=1427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 01:33:17.951173 systemd[1]: Finished systemd-update-utmp.service. Sep 13 01:33:17.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:17.974215 systemd[1]: Finished clean-ca-certificates.service. Sep 13 01:33:17.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:17.979812 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 01:33:17.999858 systemd[1]: Started systemd-timesyncd.service. Sep 13 01:33:18.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:18.004937 systemd[1]: Reached target time-set.target. Sep 13 01:33:18.071332 systemd-resolved[1424]: Positive Trust Anchors: Sep 13 01:33:18.071702 systemd-resolved[1424]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:33:18.071779 systemd-resolved[1424]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 01:33:18.136805 systemd-resolved[1424]: Using system hostname 'ci-3510.3.8-n-8e33b0f951'. Sep 13 01:33:18.138749 systemd[1]: Started systemd-resolved.service. Sep 13 01:33:18.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 01:33:18.143805 systemd[1]: Reached target network.target. Sep 13 01:33:18.148391 systemd[1]: Reached target network-online.target. Sep 13 01:33:18.153599 systemd[1]: Reached target nss-lookup.target. Sep 13 01:33:18.250168 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 01:33:18.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 01:33:18.428000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 01:33:18.428000 audit[1442]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffbbfb2c0 a2=420 a3=0 items=0 ppid=1421 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 01:33:18.428000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 01:33:18.430460 augenrules[1442]: No rules Sep 13 01:33:18.431407 systemd[1]: Finished audit-rules.service. Sep 13 01:33:23.507362 ldconfig[1307]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 01:33:23.519996 systemd[1]: Finished ldconfig.service. Sep 13 01:33:23.526377 systemd[1]: Starting systemd-update-done.service... Sep 13 01:33:23.582157 systemd[1]: Finished systemd-update-done.service. Sep 13 01:33:23.587361 systemd[1]: Reached target sysinit.target. Sep 13 01:33:23.591886 systemd[1]: Started motdgen.path. Sep 13 01:33:23.595996 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 01:33:23.602605 systemd[1]: Started logrotate.timer. Sep 13 01:33:23.606929 systemd[1]: Started mdadm.timer. Sep 13 01:33:23.611142 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 01:33:23.616780 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 01:33:23.616811 systemd[1]: Reached target paths.target. Sep 13 01:33:23.621319 systemd[1]: Reached target timers.target. Sep 13 01:33:23.626181 systemd[1]: Listening on dbus.socket. Sep 13 01:33:23.631598 systemd[1]: Starting docker.socket... Sep 13 01:33:23.661853 systemd[1]: Listening on sshd.socket. 
Sep 13 01:33:23.666470 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:23.667001 systemd[1]: Listening on docker.socket. Sep 13 01:33:23.671727 systemd[1]: Reached target sockets.target. Sep 13 01:33:23.676646 systemd[1]: Reached target basic.target. Sep 13 01:33:23.681161 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 01:33:23.681192 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 01:33:23.682396 systemd[1]: Starting containerd.service... Sep 13 01:33:23.687915 systemd[1]: Starting dbus.service... Sep 13 01:33:23.692595 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 01:33:23.698418 systemd[1]: Starting extend-filesystems.service... Sep 13 01:33:23.703234 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 01:33:23.716486 systemd[1]: Starting kubelet.service... Sep 13 01:33:23.721602 systemd[1]: Starting motdgen.service... Sep 13 01:33:23.726558 systemd[1]: Started nvidia.service. Sep 13 01:33:23.733228 systemd[1]: Starting prepare-helm.service... Sep 13 01:33:23.739558 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 01:33:23.745634 systemd[1]: Starting sshd-keygen.service... Sep 13 01:33:23.752602 systemd[1]: Starting systemd-logind.service... Sep 13 01:33:23.757486 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 01:33:23.757556 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 13 01:33:23.758034 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 01:33:23.759088 systemd[1]: Starting update-engine.service... Sep 13 01:33:23.768283 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 01:33:23.778717 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 01:33:23.779722 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 01:33:23.787883 jq[1452]: false Sep 13 01:33:23.788281 jq[1467]: true Sep 13 01:33:23.802568 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 01:33:23.802740 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 01:33:23.834714 jq[1475]: true Sep 13 01:33:23.836647 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 01:33:23.836826 systemd[1]: Finished motdgen.service. Sep 13 01:33:23.843377 extend-filesystems[1453]: Found loop1 Sep 13 01:33:23.843377 extend-filesystems[1453]: Found sda Sep 13 01:33:23.843377 extend-filesystems[1453]: Found sda1 Sep 13 01:33:23.843377 extend-filesystems[1453]: Found sda2 Sep 13 01:33:23.843377 extend-filesystems[1453]: Found sda3 Sep 13 01:33:23.843377 extend-filesystems[1453]: Found usr Sep 13 01:33:23.843377 extend-filesystems[1453]: Found sda4 Sep 13 01:33:23.843377 extend-filesystems[1453]: Found sda6 Sep 13 01:33:23.843377 extend-filesystems[1453]: Found sda7 Sep 13 01:33:23.843377 extend-filesystems[1453]: Found sda9 Sep 13 01:33:23.843377 extend-filesystems[1453]: Checking size of /dev/sda9 Sep 13 01:33:23.978951 tar[1473]: linux-arm64/LICENSE Sep 13 01:33:23.978951 tar[1473]: linux-arm64/helm Sep 13 01:33:23.983070 env[1477]: time="2025-09-13T01:33:23.914743940Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 01:33:23.890783 systemd-logind[1462]: Watching system buttons on /dev/input/event1 (AT 
Translated Set 2 keyboard) Sep 13 01:33:23.895241 systemd-logind[1462]: New seat seat0. Sep 13 01:33:23.992878 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Sep 13 01:33:23.992816 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 01:33:24.000370 extend-filesystems[1453]: Old size kept for /dev/sda9 Sep 13 01:33:24.000370 extend-filesystems[1453]: Found sr0 Sep 13 01:33:24.001582 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 01:33:24.052448 env[1477]: time="2025-09-13T01:33:24.048958420Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 01:33:24.052448 env[1477]: time="2025-09-13T01:33:24.049139220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:24.001736 systemd[1]: Finished extend-filesystems.service. Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.057865340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.057967140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.062317740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.062360020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.062377020Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.062387260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.062511980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.062708300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.062858860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:33:24.065118 env[1477]: time="2025-09-13T01:33:24.062874660Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 01:33:24.065482 env[1477]: time="2025-09-13T01:33:24.062930900Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 01:33:24.065482 env[1477]: time="2025-09-13T01:33:24.062944060Z" level=info msg="metadata content store policy set" policy=shared Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.080461940Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.080522180Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.080536100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.080573820Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.080668500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.080684140Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.080700340Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.081093460Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.081145300Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.081161220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.081174060Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.081196340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.081355860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 13 01:33:24.082357 env[1477]: time="2025-09-13T01:33:24.081447540Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081698020Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081724540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081748940Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081796260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081818740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081832500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081851420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081863740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081875580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081894180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081908540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.081922260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.082076540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.082095020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082702 env[1477]: time="2025-09-13T01:33:24.082135740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 01:33:24.082978 env[1477]: time="2025-09-13T01:33:24.082147380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 01:33:24.082978 env[1477]: time="2025-09-13T01:33:24.082162460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 01:33:24.082978 env[1477]: time="2025-09-13T01:33:24.082175820Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 01:33:24.082978 env[1477]: time="2025-09-13T01:33:24.082202300Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 01:33:24.082978 env[1477]: time="2025-09-13T01:33:24.082243020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 01:33:24.083927 env[1477]: time="2025-09-13T01:33:24.083259660Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 01:33:24.083927 env[1477]: time="2025-09-13T01:33:24.083334340Z" level=info msg="Connect containerd service" Sep 13 01:33:24.083927 env[1477]: time="2025-09-13T01:33:24.083409340Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.084641540Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.084727300Z" level=info msg="Start subscribing containerd event" Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.084768020Z" level=info msg="Start recovering state" Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.084839580Z" level=info msg="Start event monitor" Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.084858340Z" level=info msg="Start snapshots syncer" Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.084878460Z" level=info msg="Start cni network conf syncer for default" Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.084887420Z" level=info msg="Start streaming server" Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.085289540Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.085328300Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 01:33:24.099834 env[1477]: time="2025-09-13T01:33:24.085391500Z" level=info msg="containerd successfully booted in 0.179756s" Sep 13 01:33:24.085476 systemd[1]: Started containerd.service. 
Sep 13 01:33:24.152819 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 01:33:24.395623 dbus-daemon[1451]: [system] SELinux support is enabled Sep 13 01:33:24.395814 systemd[1]: Started dbus.service. Sep 13 01:33:24.401995 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 01:33:24.402022 systemd[1]: Reached target system-config.target. Sep 13 01:33:24.410492 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 01:33:24.410519 systemd[1]: Reached target user-config.target. Sep 13 01:33:24.415571 update_engine[1463]: I0913 01:33:24.403030 1463 main.cc:92] Flatcar Update Engine starting Sep 13 01:33:24.419169 systemd[1]: Started systemd-logind.service. Sep 13 01:33:24.476008 systemd[1]: Started update-engine.service. Sep 13 01:33:24.476321 update_engine[1463]: I0913 01:33:24.476073 1463 update_check_scheduler.cc:74] Next update check in 2m58s Sep 13 01:33:24.485521 systemd[1]: Started locksmithd.service. Sep 13 01:33:24.630501 tar[1473]: linux-arm64/README.md Sep 13 01:33:24.635941 systemd[1]: Finished prepare-helm.service. Sep 13 01:33:24.912456 systemd[1]: Started kubelet.service. Sep 13 01:33:25.412983 kubelet[1558]: E0913 01:33:25.412922 1558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:33:25.414630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:33:25.414764 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 13 01:33:25.866282 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 01:33:26.821002 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 01:33:26.839171 systemd[1]: Finished sshd-keygen.service. Sep 13 01:33:26.845481 systemd[1]: Starting issuegen.service... Sep 13 01:33:26.850777 systemd[1]: Started waagent.service. Sep 13 01:33:26.855613 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 01:33:26.855787 systemd[1]: Finished issuegen.service. Sep 13 01:33:26.862035 systemd[1]: Starting systemd-user-sessions.service... Sep 13 01:33:26.907429 systemd[1]: Finished systemd-user-sessions.service. Sep 13 01:33:26.914412 systemd[1]: Started getty@tty1.service. Sep 13 01:33:26.920693 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 13 01:33:26.927918 systemd[1]: Reached target getty.target. Sep 13 01:33:26.932717 systemd[1]: Reached target multi-user.target. Sep 13 01:33:26.938931 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 01:33:26.947916 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 01:33:26.948084 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 01:33:26.957536 systemd[1]: Startup finished in 775ms (kernel) + 14.881s (initrd) + 28.913s (userspace) = 44.570s. Sep 13 01:33:27.751942 login[1582]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Sep 13 01:33:27.777057 login[1581]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:33:27.908135 systemd[1]: Created slice user-500.slice. Sep 13 01:33:27.909364 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 01:33:27.911797 systemd-logind[1462]: New session 1 of user core. Sep 13 01:33:27.975434 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 01:33:27.977001 systemd[1]: Starting user@500.service... 
Sep 13 01:33:28.032271 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:33:28.450876 systemd-timesyncd[1426]: Timed out waiting for reply from 23.168.24.210:123 (0.flatcar.pool.ntp.org). Sep 13 01:33:28.461696 systemd[1585]: Queued start job for default target default.target. Sep 13 01:33:28.462574 systemd[1585]: Reached target paths.target. Sep 13 01:33:28.462601 systemd[1585]: Reached target sockets.target. Sep 13 01:33:28.462613 systemd[1585]: Reached target timers.target. Sep 13 01:33:28.462623 systemd[1585]: Reached target basic.target. Sep 13 01:33:28.462672 systemd[1585]: Reached target default.target. Sep 13 01:33:28.462697 systemd[1585]: Startup finished in 423ms. Sep 13 01:33:28.462741 systemd[1]: Started user@500.service. Sep 13 01:33:28.463761 systemd[1]: Started session-1.scope. Sep 13 01:33:28.522650 systemd-timesyncd[1426]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org). Sep 13 01:33:28.522893 systemd-timesyncd[1426]: Initial clock synchronization to Sat 2025-09-13 01:33:28.523932 UTC. Sep 13 01:33:28.753694 login[1582]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:33:28.757242 systemd-logind[1462]: New session 2 of user core. Sep 13 01:33:28.758067 systemd[1]: Started session-2.scope. Sep 13 01:33:35.451274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 01:33:35.451438 systemd[1]: Stopped kubelet.service. Sep 13 01:33:35.452874 systemd[1]: Starting kubelet.service... Sep 13 01:33:35.986130 systemd[1]: Started kubelet.service. 
Sep 13 01:33:36.032082 kubelet[1612]: E0913 01:33:36.032021 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:33:36.034940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:33:36.035091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:33:36.478174 waagent[1579]: 2025-09-13T01:33:36.478045Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Sep 13 01:33:36.523468 waagent[1579]: 2025-09-13T01:33:36.523371Z INFO Daemon Daemon OS: flatcar 3510.3.8 Sep 13 01:33:36.528379 waagent[1579]: 2025-09-13T01:33:36.528295Z INFO Daemon Daemon Python: 3.9.16 Sep 13 01:33:36.533510 waagent[1579]: 2025-09-13T01:33:36.533393Z INFO Daemon Daemon Run daemon Sep 13 01:33:36.538784 waagent[1579]: 2025-09-13T01:33:36.538710Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Sep 13 01:33:36.567863 waagent[1579]: 2025-09-13T01:33:36.567700Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Sep 13 01:33:36.584035 waagent[1579]: 2025-09-13T01:33:36.583871Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 01:33:36.594561 waagent[1579]: 2025-09-13T01:33:36.594474Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 01:33:36.600033 waagent[1579]: 2025-09-13T01:33:36.599949Z INFO Daemon Daemon Using waagent for provisioning Sep 13 01:33:36.606303 waagent[1579]: 2025-09-13T01:33:36.606226Z INFO Daemon Daemon Activate resource disk Sep 13 01:33:36.611388 waagent[1579]: 2025-09-13T01:33:36.611309Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 13 01:33:36.626456 waagent[1579]: 2025-09-13T01:33:36.626359Z INFO Daemon Daemon Found device: None Sep 13 01:33:36.631572 waagent[1579]: 2025-09-13T01:33:36.631487Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 13 01:33:36.641023 waagent[1579]: 2025-09-13T01:33:36.640939Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 13 01:33:36.653928 waagent[1579]: 2025-09-13T01:33:36.653856Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 01:33:36.660219 waagent[1579]: 2025-09-13T01:33:36.660125Z INFO Daemon Daemon Running default provisioning handler Sep 13 01:33:36.675573 waagent[1579]: 2025-09-13T01:33:36.675398Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Sep 13 01:33:36.691593 waagent[1579]: 2025-09-13T01:33:36.691428Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 01:33:36.702350 waagent[1579]: 2025-09-13T01:33:36.702254Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 01:33:36.708341 waagent[1579]: 2025-09-13T01:33:36.708258Z INFO Daemon Daemon Copying ovf-env.xml Sep 13 01:33:36.856741 waagent[1579]: 2025-09-13T01:33:36.855508Z INFO Daemon Daemon Successfully mounted dvd Sep 13 01:33:36.977866 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 13 01:33:37.019393 waagent[1579]: 2025-09-13T01:33:37.019245Z INFO Daemon Daemon Detect protocol endpoint Sep 13 01:33:37.025031 waagent[1579]: 2025-09-13T01:33:37.024930Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 01:33:37.031754 waagent[1579]: 2025-09-13T01:33:37.031659Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 13 01:33:37.039182 waagent[1579]: 2025-09-13T01:33:37.039079Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 13 01:33:37.045531 waagent[1579]: 2025-09-13T01:33:37.045442Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 13 01:33:37.051171 waagent[1579]: 2025-09-13T01:33:37.051065Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 13 01:33:37.181589 waagent[1579]: 2025-09-13T01:33:37.181455Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 13 01:33:37.189507 waagent[1579]: 2025-09-13T01:33:37.189451Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 13 01:33:37.195337 waagent[1579]: 2025-09-13T01:33:37.195250Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 13 01:33:38.010452 waagent[1579]: 2025-09-13T01:33:38.010287Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 13 01:33:38.027839 waagent[1579]: 2025-09-13T01:33:38.027742Z INFO Daemon Daemon Forcing an update of the goal state.. 
Sep 13 01:33:38.034391 waagent[1579]: 2025-09-13T01:33:38.034288Z INFO Daemon Daemon Fetching goal state [incarnation 1] Sep 13 01:33:38.200929 waagent[1579]: 2025-09-13T01:33:38.200770Z INFO Daemon Daemon Found private key matching thumbprint 8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08 Sep 13 01:33:38.210178 waagent[1579]: 2025-09-13T01:33:38.210061Z INFO Daemon Daemon Fetch goal state completed Sep 13 01:33:38.287974 waagent[1579]: 2025-09-13T01:33:38.287858Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: a42d089a-3f09-465a-aedf-8b387031d106 New eTag: 13035000026051838001] Sep 13 01:33:38.299155 waagent[1579]: 2025-09-13T01:33:38.299036Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Sep 13 01:33:38.352565 waagent[1579]: 2025-09-13T01:33:38.352494Z INFO Daemon Daemon Starting provisioning Sep 13 01:33:38.358235 waagent[1579]: 2025-09-13T01:33:38.358141Z INFO Daemon Daemon Handle ovf-env.xml. Sep 13 01:33:38.363432 waagent[1579]: 2025-09-13T01:33:38.363352Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-8e33b0f951] Sep 13 01:33:38.420074 waagent[1579]: 2025-09-13T01:33:38.419921Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-8e33b0f951] Sep 13 01:33:38.426799 waagent[1579]: 2025-09-13T01:33:38.426700Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 13 01:33:38.433783 waagent[1579]: 2025-09-13T01:33:38.433698Z INFO Daemon Daemon Primary interface is [eth0] Sep 13 01:33:38.451848 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Sep 13 01:33:38.452041 systemd[1]: Stopped systemd-networkd-wait-online.service. Sep 13 01:33:38.452123 systemd[1]: Stopping systemd-networkd-wait-online.service... Sep 13 01:33:38.452373 systemd[1]: Stopping systemd-networkd.service... Sep 13 01:33:38.458148 systemd-networkd[1243]: eth0: DHCPv6 lease lost Sep 13 01:33:38.460093 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 13 01:33:38.460314 systemd[1]: Stopped systemd-networkd.service. Sep 13 01:33:38.462488 systemd[1]: Starting systemd-networkd.service... Sep 13 01:33:38.491567 systemd-networkd[1637]: enP17766s1: Link UP Sep 13 01:33:38.491582 systemd-networkd[1637]: enP17766s1: Gained carrier Sep 13 01:33:38.492706 systemd-networkd[1637]: eth0: Link UP Sep 13 01:33:38.492717 systemd-networkd[1637]: eth0: Gained carrier Sep 13 01:33:38.493071 systemd-networkd[1637]: lo: Link UP Sep 13 01:33:38.493081 systemd-networkd[1637]: lo: Gained carrier Sep 13 01:33:38.493353 systemd-networkd[1637]: eth0: Gained IPv6LL Sep 13 01:33:38.493587 systemd-networkd[1637]: Enumeration completed Sep 13 01:33:38.493711 systemd[1]: Started systemd-networkd.service. Sep 13 01:33:38.494255 systemd-networkd[1637]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:33:38.495703 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 01:33:38.500362 waagent[1579]: 2025-09-13T01:33:38.499718Z INFO Daemon Daemon Create user account if not exists Sep 13 01:33:38.506128 waagent[1579]: 2025-09-13T01:33:38.506003Z INFO Daemon Daemon User core already exists, skip useradd Sep 13 01:33:38.512827 waagent[1579]: 2025-09-13T01:33:38.512721Z INFO Daemon Daemon Configure sudoer Sep 13 01:33:38.522233 systemd-networkd[1637]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 01:33:38.528137 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 01:33:38.529092 waagent[1579]: 2025-09-13T01:33:38.528983Z INFO Daemon Daemon Configure sshd Sep 13 01:33:38.533741 waagent[1579]: 2025-09-13T01:33:38.533642Z INFO Daemon Daemon Deploy ssh public key. 
Sep 13 01:33:39.706557 waagent[1579]: 2025-09-13T01:33:39.706480Z INFO Daemon Daemon Provisioning complete Sep 13 01:33:39.726185 waagent[1579]: 2025-09-13T01:33:39.726094Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 13 01:33:39.732738 waagent[1579]: 2025-09-13T01:33:39.732656Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 13 01:33:39.744210 waagent[1579]: 2025-09-13T01:33:39.744124Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Sep 13 01:33:40.065210 waagent[1643]: 2025-09-13T01:33:40.065037Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Sep 13 01:33:40.066427 waagent[1643]: 2025-09-13T01:33:40.066352Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:40.066711 waagent[1643]: 2025-09-13T01:33:40.066661Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:40.080315 waagent[1643]: 2025-09-13T01:33:40.080206Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. 
Sep 13 01:33:40.080680 waagent[1643]: 2025-09-13T01:33:40.080629Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Sep 13 01:33:40.143041 waagent[1643]: 2025-09-13T01:33:40.142885Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08 Sep 13 01:33:40.143582 waagent[1643]: 2025-09-13T01:33:40.143521Z INFO ExtHandler ExtHandler Fetch goal state completed Sep 13 01:33:40.159395 waagent[1643]: 2025-09-13T01:33:40.159335Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 6dfc2954-1b2d-4672-af36-f18c4ec4f0de New eTag: 13035000026051838001] Sep 13 01:33:40.160236 waagent[1643]: 2025-09-13T01:33:40.160170Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Sep 13 01:33:40.264418 waagent[1643]: 2025-09-13T01:33:40.264266Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 01:33:40.285122 waagent[1643]: 2025-09-13T01:33:40.285018Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1643 Sep 13 01:33:40.289091 waagent[1643]: 2025-09-13T01:33:40.289010Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 01:33:40.290430 waagent[1643]: 2025-09-13T01:33:40.290364Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 13 01:33:40.420880 waagent[1643]: 2025-09-13T01:33:40.420752Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 01:33:40.421277 waagent[1643]: 2025-09-13T01:33:40.421217Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 01:33:40.429446 waagent[1643]: 2025-09-13T01:33:40.429376Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Sep 13 01:33:40.430067 waagent[1643]: 2025-09-13T01:33:40.429999Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 01:33:40.431370 waagent[1643]: 2025-09-13T01:33:40.431298Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Sep 13 01:33:40.432859 waagent[1643]: 2025-09-13T01:33:40.432781Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 01:33:40.433574 waagent[1643]: 2025-09-13T01:33:40.433508Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:40.433853 waagent[1643]: 2025-09-13T01:33:40.433802Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:40.434574 waagent[1643]: 2025-09-13T01:33:40.434513Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 13 01:33:40.434991 waagent[1643]: 2025-09-13T01:33:40.434934Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 01:33:40.434991 waagent[1643]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 01:33:40.434991 waagent[1643]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 01:33:40.434991 waagent[1643]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 01:33:40.434991 waagent[1643]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:40.434991 waagent[1643]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:40.434991 waagent[1643]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:40.437656 waagent[1643]: 2025-09-13T01:33:40.437474Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 13 01:33:40.438388 waagent[1643]: 2025-09-13T01:33:40.438305Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:40.438926 waagent[1643]: 2025-09-13T01:33:40.438863Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:40.439649 waagent[1643]: 2025-09-13T01:33:40.439582Z INFO EnvHandler ExtHandler Configure routes Sep 13 01:33:40.439904 waagent[1643]: 2025-09-13T01:33:40.439855Z INFO EnvHandler ExtHandler Gateway:None Sep 13 01:33:40.440127 waagent[1643]: 2025-09-13T01:33:40.440061Z INFO EnvHandler ExtHandler Routes:None Sep 13 01:33:40.441161 waagent[1643]: 2025-09-13T01:33:40.441084Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 01:33:40.441256 waagent[1643]: 2025-09-13T01:33:40.441190Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 01:33:40.442008 waagent[1643]: 2025-09-13T01:33:40.441927Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 01:33:40.442115 waagent[1643]: 2025-09-13T01:33:40.442033Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 13 01:33:40.442561 waagent[1643]: 2025-09-13T01:33:40.442488Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 01:33:40.455223 waagent[1643]: 2025-09-13T01:33:40.455141Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Sep 13 01:33:40.455920 waagent[1643]: 2025-09-13T01:33:40.455861Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 01:33:40.457063 waagent[1643]: 2025-09-13T01:33:40.456997Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Sep 13 01:33:40.492907 waagent[1643]: 2025-09-13T01:33:40.492768Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1637' Sep 13 01:33:40.503728 waagent[1643]: 2025-09-13T01:33:40.503533Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Sep 13 01:33:40.584896 waagent[1643]: 2025-09-13T01:33:40.584759Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 01:33:40.584896 waagent[1643]: Executing ['ip', '-a', '-o', 'link']: Sep 13 01:33:40.584896 waagent[1643]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 01:33:40.584896 waagent[1643]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:d8:a6 brd ff:ff:ff:ff:ff:ff Sep 13 01:33:40.584896 waagent[1643]: 3: enP17766s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:d8:a6 brd ff:ff:ff:ff:ff:ff\ altname enP17766p0s2 Sep 13 01:33:40.584896 waagent[1643]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 01:33:40.584896 waagent[1643]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 01:33:40.584896 waagent[1643]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 01:33:40.584896 waagent[1643]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 01:33:40.584896 waagent[1643]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 01:33:40.584896 waagent[1643]: 2: eth0 inet6 fe80::20d:3aff:fe06:d8a6/64 scope link \ valid_lft forever preferred_lft forever Sep 13 01:33:40.932771 waagent[1643]: 2025-09-13T01:33:40.932693Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Sep 13 01:33:41.748979 
waagent[1579]: 2025-09-13T01:33:41.748839Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Sep 13 01:33:41.754995 waagent[1579]: 2025-09-13T01:33:41.754923Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Sep 13 01:33:43.110751 waagent[1679]: 2025-09-13T01:33:43.110622Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Sep 13 01:33:43.111499 waagent[1679]: 2025-09-13T01:33:43.111433Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Sep 13 01:33:43.111793 waagent[1679]: 2025-09-13T01:33:43.111741Z INFO ExtHandler ExtHandler Python: 3.9.16 Sep 13 01:33:43.112044 waagent[1679]: 2025-09-13T01:33:43.111994Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Sep 13 01:33:43.128525 waagent[1679]: 2025-09-13T01:33:43.128389Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 01:33:43.129242 waagent[1679]: 2025-09-13T01:33:43.129175Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:43.129547 waagent[1679]: 2025-09-13T01:33:43.129495Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:43.129898 waagent[1679]: 2025-09-13T01:33:43.129843Z INFO ExtHandler ExtHandler Initializing the goal state... 
Sep 13 01:33:43.144730 waagent[1679]: 2025-09-13T01:33:43.144618Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 01:33:43.158663 waagent[1679]: 2025-09-13T01:33:43.158591Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 13 01:33:43.160052 waagent[1679]: 2025-09-13T01:33:43.159985Z INFO ExtHandler Sep 13 01:33:43.160413 waagent[1679]: 2025-09-13T01:33:43.160357Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1a854c1e-a39a-4a0e-93b7-90d9c79ad05f eTag: 13035000026051838001 source: Fabric] Sep 13 01:33:43.161391 waagent[1679]: 2025-09-13T01:33:43.161329Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 13 01:33:43.162872 waagent[1679]: 2025-09-13T01:33:43.162806Z INFO ExtHandler Sep 13 01:33:43.163168 waagent[1679]: 2025-09-13T01:33:43.163088Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 13 01:33:43.172635 waagent[1679]: 2025-09-13T01:33:43.172576Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 13 01:33:43.173444 waagent[1679]: 2025-09-13T01:33:43.173394Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Sep 13 01:33:43.196425 waagent[1679]: 2025-09-13T01:33:43.196350Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Sep 13 01:33:43.270515 waagent[1679]: 2025-09-13T01:33:43.270368Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F5FB8A06CF54A1DD39E887D68C2D10D70DAFA08', 'hasPrivateKey': True} Sep 13 01:33:43.272348 waagent[1679]: 2025-09-13T01:33:43.272266Z INFO ExtHandler Fetch goal state from WireServer completed Sep 13 01:33:43.273573 waagent[1679]: 2025-09-13T01:33:43.273505Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Sep 13 01:33:43.296285 waagent[1679]: 2025-09-13T01:33:43.296146Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Sep 13 01:33:43.307164 waagent[1679]: 2025-09-13T01:33:43.307007Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 01:33:43.312162 waagent[1679]: 2025-09-13T01:33:43.312015Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Sep 13 01:33:43.312596 waagent[1679]: 2025-09-13T01:33:43.312538Z INFO ExtHandler ExtHandler Checking state of the firewall Sep 13 01:33:43.433837 waagent[1679]: 2025-09-13T01:33:43.433628Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS', 'DROP'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n', 'iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: Sep 13 01:33:43.433837 waagent[1679]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.433837 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.433837 waagent[1679]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.433837 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.433837 waagent[1679]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.433837 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.433837 waagent[1679]: 55 7869 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 01:33:43.435519 waagent[1679]: 2025-09-13T01:33:43.435441Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Sep 13 01:33:43.439532 waagent[1679]: 2025-09-13T01:33:43.439388Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Sep 13 01:33:43.440015 waagent[1679]: 2025-09-13T01:33:43.439959Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 01:33:43.440632 waagent[1679]: 2025-09-13T01:33:43.440568Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 01:33:43.450005 waagent[1679]: 2025-09-13T01:33:43.449939Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Sep 13 01:33:43.450905 waagent[1679]: 2025-09-13T01:33:43.450831Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Sep 13 01:33:43.460044 waagent[1679]: 2025-09-13T01:33:43.459947Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1679 Sep 13 01:33:43.463837 waagent[1679]: 2025-09-13T01:33:43.463738Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 01:33:43.465023 waagent[1679]: 2025-09-13T01:33:43.464955Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Sep 13 01:33:43.466259 waagent[1679]: 2025-09-13T01:33:43.466191Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 13 01:33:43.469333 waagent[1679]: 2025-09-13T01:33:43.469261Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Sep 13 01:33:43.469861 waagent[1679]: 2025-09-13T01:33:43.469801Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 13 01:33:43.471516 waagent[1679]: 2025-09-13T01:33:43.471443Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 01:33:43.471836 waagent[1679]: 2025-09-13T01:33:43.471761Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:43.472533 waagent[1679]: 2025-09-13T01:33:43.472475Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:43.473297 waagent[1679]: 2025-09-13T01:33:43.473227Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 13 01:33:43.473761 waagent[1679]: 2025-09-13T01:33:43.473632Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 01:33:43.473974 waagent[1679]: 2025-09-13T01:33:43.473903Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 01:33:43.474597 waagent[1679]: 2025-09-13T01:33:43.474523Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 01:33:43.474760 waagent[1679]: 2025-09-13T01:33:43.474681Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 01:33:43.474760 waagent[1679]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 01:33:43.474760 waagent[1679]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 01:33:43.474760 waagent[1679]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 01:33:43.474760 waagent[1679]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:43.474760 waagent[1679]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:43.474760 waagent[1679]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 01:33:43.478396 waagent[1679]: 2025-09-13T01:33:43.478234Z INFO EnvHandler ExtHandler Configure routes Sep 13 01:33:43.479028 waagent[1679]: 2025-09-13T01:33:43.478963Z INFO EnvHandler ExtHandler Gateway:None Sep 13 01:33:43.479331 waagent[1679]: 2025-09-13T01:33:43.479278Z INFO EnvHandler ExtHandler Routes:None Sep 13 01:33:43.485010 waagent[1679]: 2025-09-13T01:33:43.484925Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 01:33:43.485204 waagent[1679]: 2025-09-13T01:33:43.484306Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 01:33:43.494703 waagent[1679]: 2025-09-13T01:33:43.494526Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 01:33:43.494919 waagent[1679]: 2025-09-13T01:33:43.494823Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Sep 13 01:33:43.500015 waagent[1679]: 2025-09-13T01:33:43.499834Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 01:33:43.501677 waagent[1679]: 2025-09-13T01:33:43.501580Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 01:33:43.501677 waagent[1679]: Executing ['ip', '-a', '-o', 'link']: Sep 13 01:33:43.501677 waagent[1679]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 01:33:43.501677 waagent[1679]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:d8:a6 brd ff:ff:ff:ff:ff:ff Sep 13 01:33:43.501677 waagent[1679]: 3: enP17766s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:06:d8:a6 brd ff:ff:ff:ff:ff:ff\ altname enP17766p0s2 Sep 13 01:33:43.501677 waagent[1679]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 01:33:43.501677 waagent[1679]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 01:33:43.501677 waagent[1679]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 01:33:43.501677 waagent[1679]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 01:33:43.501677 waagent[1679]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Sep 13 01:33:43.501677 waagent[1679]: 2: eth0 inet6 fe80::20d:3aff:fe06:d8a6/64 scope link \ valid_lft forever preferred_lft forever Sep 13 01:33:43.503141 waagent[1679]: 2025-09-13T01:33:43.503044Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Sep 13 01:33:43.521764 waagent[1679]: 2025-09-13T01:33:43.521679Z INFO ExtHandler ExtHandler Downloading agent manifest Sep 13 01:33:43.539592 waagent[1679]: 
2025-09-13T01:33:43.539495Z INFO ExtHandler ExtHandler Sep 13 01:33:43.540737 waagent[1679]: 2025-09-13T01:33:43.540656Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fe9970be-bbe1-464a-a866-74bb0af7cb73 correlation a98c7f83-1e9b-4415-bf98-3eac0184c3ea created: 2025-09-13T01:32:00.525436Z] Sep 13 01:33:43.543605 waagent[1679]: 2025-09-13T01:33:43.543519Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 13 01:33:43.548874 waagent[1679]: 2025-09-13T01:33:43.548785Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 9 ms] Sep 13 01:33:43.574768 waagent[1679]: 2025-09-13T01:33:43.574693Z INFO ExtHandler ExtHandler Looking for existing remote access users. Sep 13 01:33:43.577628 waagent[1679]: 2025-09-13T01:33:43.577555Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0CB6B4CF-02DA-41AA-A2F1-EA720A3C1474;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Sep 13 01:33:43.608750 waagent[1679]: 2025-09-13T01:33:43.608648Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS', 'DROP'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n', 'iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: Sep 13 01:33:43.608750 waagent[1679]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.608750 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.608750 waagent[1679]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.608750 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.608750 waagent[1679]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.608750 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.608750 waagent[1679]: 84 14355 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 01:33:43.676136 waagent[1679]: 2025-09-13T01:33:43.675976Z INFO EnvHandler ExtHandler The firewall was setup successfully: Sep 13 01:33:43.676136 waagent[1679]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.676136 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.676136 waagent[1679]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.676136 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.676136 waagent[1679]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 01:33:43.676136 waagent[1679]: pkts bytes target prot opt in out source destination Sep 13 01:33:43.676136 waagent[1679]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 01:33:43.676136 waagent[1679]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 01:33:43.676136 waagent[1679]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 01:33:43.679935 waagent[1679]: 2025-09-13T01:33:43.679874Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 13 01:33:46.201339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 01:33:46.201525 systemd[1]: Stopped kubelet.service. Sep 13 01:33:46.202950 systemd[1]: Starting kubelet.service... 
Sep 13 01:33:46.296895 systemd[1]: Started kubelet.service. Sep 13 01:33:46.411459 kubelet[1727]: E0913 01:33:46.411392 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:33:46.413734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:33:46.413860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:33:47.178573 systemd[1]: Created slice system-sshd.slice. Sep 13 01:33:47.180305 systemd[1]: Started sshd@0-10.200.20.18:22-10.200.16.10:39472.service. Sep 13 01:33:47.846209 sshd[1733]: Accepted publickey for core from 10.200.16.10 port 39472 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:33:47.864172 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:33:47.868684 systemd[1]: Started session-3.scope. Sep 13 01:33:47.869861 systemd-logind[1462]: New session 3 of user core. Sep 13 01:33:48.218045 systemd[1]: Started sshd@1-10.200.20.18:22-10.200.16.10:39484.service. Sep 13 01:33:48.627993 sshd[1738]: Accepted publickey for core from 10.200.16.10 port 39484 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:33:48.629193 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:33:48.633593 systemd[1]: Started session-4.scope. Sep 13 01:33:48.633884 systemd-logind[1462]: New session 4 of user core. Sep 13 01:33:48.954432 sshd[1738]: pam_unix(sshd:session): session closed for user core Sep 13 01:33:48.957547 systemd[1]: sshd@1-10.200.20.18:22-10.200.16.10:39484.service: Deactivated successfully. Sep 13 01:33:48.957885 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. 
Sep 13 01:33:48.958271 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 01:33:48.959186 systemd-logind[1462]: Removed session 4. Sep 13 01:33:49.026334 systemd[1]: Started sshd@2-10.200.20.18:22-10.200.16.10:39490.service. Sep 13 01:33:49.440838 sshd[1744]: Accepted publickey for core from 10.200.16.10 port 39490 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:33:49.441783 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:33:49.445718 systemd-logind[1462]: New session 5 of user core. Sep 13 01:33:49.446212 systemd[1]: Started session-5.scope. Sep 13 01:33:49.745736 sshd[1744]: pam_unix(sshd:session): session closed for user core Sep 13 01:33:49.748310 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 01:33:49.748846 systemd[1]: sshd@2-10.200.20.18:22-10.200.16.10:39490.service: Deactivated successfully. Sep 13 01:33:49.749926 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Sep 13 01:33:49.750629 systemd-logind[1462]: Removed session 5. Sep 13 01:33:49.814735 systemd[1]: Started sshd@3-10.200.20.18:22-10.200.16.10:39500.service. Sep 13 01:33:50.228897 sshd[1750]: Accepted publickey for core from 10.200.16.10 port 39500 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:33:50.230511 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:33:50.234761 systemd[1]: Started session-6.scope. Sep 13 01:33:50.235946 systemd-logind[1462]: New session 6 of user core. Sep 13 01:33:50.557877 sshd[1750]: pam_unix(sshd:session): session closed for user core Sep 13 01:33:50.560824 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. Sep 13 01:33:50.561010 systemd[1]: sshd@3-10.200.20.18:22-10.200.16.10:39500.service: Deactivated successfully. Sep 13 01:33:50.561715 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 01:33:50.562365 systemd-logind[1462]: Removed session 6. 
Sep 13 01:33:50.626636 systemd[1]: Started sshd@4-10.200.20.18:22-10.200.16.10:33110.service. Sep 13 01:33:51.040590 sshd[1756]: Accepted publickey for core from 10.200.16.10 port 33110 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:33:51.042475 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:33:51.046795 systemd[1]: Started session-7.scope. Sep 13 01:33:51.048149 systemd-logind[1462]: New session 7 of user core. Sep 13 01:33:51.611011 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 01:33:51.611259 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 01:33:51.644891 systemd[1]: Starting docker.service... Sep 13 01:33:51.705920 env[1769]: time="2025-09-13T01:33:51.705868327Z" level=info msg="Starting up" Sep 13 01:33:51.711201 env[1769]: time="2025-09-13T01:33:51.711171043Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 01:33:51.711335 env[1769]: time="2025-09-13T01:33:51.711322006Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 01:33:51.711411 env[1769]: time="2025-09-13T01:33:51.711394488Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 01:33:51.711466 env[1769]: time="2025-09-13T01:33:51.711454929Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 01:33:51.713740 env[1769]: time="2025-09-13T01:33:51.713713019Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 01:33:51.713860 env[1769]: time="2025-09-13T01:33:51.713847422Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 01:33:51.713936 env[1769]: time="2025-09-13T01:33:51.713922103Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" 
module=grpc Sep 13 01:33:51.713990 env[1769]: time="2025-09-13T01:33:51.713978385Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 01:33:51.775934 env[1769]: time="2025-09-13T01:33:51.775894263Z" level=info msg="Loading containers: start." Sep 13 01:33:52.009122 kernel: Initializing XFRM netlink socket Sep 13 01:33:52.042223 env[1769]: time="2025-09-13T01:33:52.042178692Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 01:33:52.199991 systemd-networkd[1637]: docker0: Link UP Sep 13 01:33:52.229467 env[1769]: time="2025-09-13T01:33:52.229422104Z" level=info msg="Loading containers: done." Sep 13 01:33:52.240627 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1467527709-merged.mount: Deactivated successfully. Sep 13 01:33:52.251855 env[1769]: time="2025-09-13T01:33:52.251816125Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 01:33:52.252206 env[1769]: time="2025-09-13T01:33:52.252188253Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 01:33:52.252390 env[1769]: time="2025-09-13T01:33:52.252375537Z" level=info msg="Daemon has completed initialization" Sep 13 01:33:52.288508 systemd[1]: Started docker.service. Sep 13 01:33:52.296260 env[1769]: time="2025-09-13T01:33:52.296198158Z" level=info msg="API listen on /run/docker.sock" Sep 13 01:33:55.944080 env[1477]: time="2025-09-13T01:33:55.944026221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 01:33:56.451310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 01:33:56.451493 systemd[1]: Stopped kubelet.service. Sep 13 01:33:56.452947 systemd[1]: Starting kubelet.service... 
Sep 13 01:33:56.553535 systemd[1]: Started kubelet.service. Sep 13 01:33:56.655396 kubelet[1891]: E0913 01:33:56.655332 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:33:56.657611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:33:56.657739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:33:57.216730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91977465.mount: Deactivated successfully. Sep 13 01:33:59.182974 env[1477]: time="2025-09-13T01:33:59.182925949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:33:59.189584 env[1477]: time="2025-09-13T01:33:59.189530275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:33:59.195400 env[1477]: time="2025-09-13T01:33:59.195355272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:33:59.200607 env[1477]: time="2025-09-13T01:33:59.200565260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:33:59.201577 env[1477]: time="2025-09-13T01:33:59.201536273Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image 
reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 13 01:33:59.203021 env[1477]: time="2025-09-13T01:33:59.202985972Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 01:34:00.930410 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 13 01:34:00.935862 env[1477]: time="2025-09-13T01:34:00.935815143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:00.944249 env[1477]: time="2025-09-13T01:34:00.944194006Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:00.948453 env[1477]: time="2025-09-13T01:34:00.948387537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:00.952923 env[1477]: time="2025-09-13T01:34:00.952857552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:00.953957 env[1477]: time="2025-09-13T01:34:00.953919725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 13 01:34:00.954728 env[1477]: time="2025-09-13T01:34:00.954697134Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 01:34:02.839447 env[1477]: time="2025-09-13T01:34:02.839394870Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:02.846720 env[1477]: time="2025-09-13T01:34:02.846679309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:02.850987 env[1477]: time="2025-09-13T01:34:02.850930794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:02.855322 env[1477]: time="2025-09-13T01:34:02.855274481Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:02.856201 env[1477]: time="2025-09-13T01:34:02.856170371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 13 01:34:02.856901 env[1477]: time="2025-09-13T01:34:02.856876539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 01:34:04.025389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118101730.mount: Deactivated successfully. 
Sep 13 01:34:04.525151 env[1477]: time="2025-09-13T01:34:04.525088500Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:04.535347 env[1477]: time="2025-09-13T01:34:04.535305917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:04.539060 env[1477]: time="2025-09-13T01:34:04.539023632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:04.542473 env[1477]: time="2025-09-13T01:34:04.542437504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:04.542783 env[1477]: time="2025-09-13T01:34:04.542748907Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 13 01:34:04.543423 env[1477]: time="2025-09-13T01:34:04.543398033Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 01:34:05.181742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687377134.mount: Deactivated successfully. Sep 13 01:34:06.701315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 01:34:06.701493 systemd[1]: Stopped kubelet.service. Sep 13 01:34:06.702942 systemd[1]: Starting kubelet.service... Sep 13 01:34:07.067433 systemd[1]: Started kubelet.service. 
Sep 13 01:34:07.149058 kubelet[1901]: E0913 01:34:07.149000 1901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:34:07.150669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:34:07.150797 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:34:07.543204 env[1477]: time="2025-09-13T01:34:07.543157356Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:07.551823 env[1477]: time="2025-09-13T01:34:07.551782103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:07.556932 env[1477]: time="2025-09-13T01:34:07.556895303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:07.561866 env[1477]: time="2025-09-13T01:34:07.561828382Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:07.562773 env[1477]: time="2025-09-13T01:34:07.562740109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 13 01:34:07.564174 env[1477]: time="2025-09-13T01:34:07.564146880Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.10\"" Sep 13 01:34:08.201668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063889971.mount: Deactivated successfully. Sep 13 01:34:08.222838 env[1477]: time="2025-09-13T01:34:08.222793642Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:08.229468 env[1477]: time="2025-09-13T01:34:08.229429211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:08.233636 env[1477]: time="2025-09-13T01:34:08.233599881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:08.237183 env[1477]: time="2025-09-13T01:34:08.237148067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:08.237668 env[1477]: time="2025-09-13T01:34:08.237641511Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 01:34:08.238166 env[1477]: time="2025-09-13T01:34:08.238144234Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 01:34:08.881537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount664322636.mount: Deactivated successfully. Sep 13 01:34:09.405525 update_engine[1463]: I0913 01:34:09.405155 1463 update_attempter.cc:509] Updating boot flags... 
Sep 13 01:34:12.183423 env[1477]: time="2025-09-13T01:34:12.183369452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:12.191339 env[1477]: time="2025-09-13T01:34:12.191299377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:12.195699 env[1477]: time="2025-09-13T01:34:12.195646282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:12.200257 env[1477]: time="2025-09-13T01:34:12.200218508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:12.201070 env[1477]: time="2025-09-13T01:34:12.201040552Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 13 01:34:17.201301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 13 01:34:17.201489 systemd[1]: Stopped kubelet.service. Sep 13 01:34:17.202871 systemd[1]: Starting kubelet.service... Sep 13 01:34:17.347798 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 01:34:17.347879 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 01:34:17.348128 systemd[1]: Stopped kubelet.service. Sep 13 01:34:17.350423 systemd[1]: Starting kubelet.service... Sep 13 01:34:17.375515 systemd[1]: Reloading. 
Sep 13 01:34:17.441831 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2025-09-13T01:34:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 01:34:17.441862 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2025-09-13T01:34:17Z" level=info msg="torcx already run" Sep 13 01:34:17.528857 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 01:34:17.528879 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 01:34:17.544855 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:34:17.991425 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 01:34:17.991662 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 01:34:17.991979 systemd[1]: Stopped kubelet.service. Sep 13 01:34:17.995594 systemd[1]: Starting kubelet.service... Sep 13 01:34:18.859726 systemd[1]: Started kubelet.service. Sep 13 01:34:18.915887 kubelet[2057]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:34:18.915887 kubelet[2057]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 13 01:34:18.915887 kubelet[2057]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:34:18.916290 kubelet[2057]: I0913 01:34:18.915940 2057 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:34:19.558021 kubelet[2057]: I0913 01:34:19.557979 2057 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 01:34:19.558587 kubelet[2057]: I0913 01:34:19.558559 2057 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:34:19.559084 kubelet[2057]: I0913 01:34:19.559063 2057 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 01:34:19.592048 kubelet[2057]: E0913 01:34:19.592004 2057 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:19.593413 kubelet[2057]: I0913 01:34:19.593389 2057 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:34:19.600207 kubelet[2057]: E0913 01:34:19.600170 2057 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:34:19.600383 kubelet[2057]: I0913 01:34:19.600368 2057 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 01:34:19.603357 kubelet[2057]: I0913 01:34:19.603329 2057 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 01:34:19.604723 kubelet[2057]: I0913 01:34:19.604681 2057 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:34:19.605025 kubelet[2057]: I0913 01:34:19.604842 2057 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-8e33b0f951","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null,"CgroupVersion":2} Sep 13 01:34:19.605193 kubelet[2057]: I0913 01:34:19.605180 2057 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:34:19.605255 kubelet[2057]: I0913 01:34:19.605247 2057 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 01:34:19.605431 kubelet[2057]: I0913 01:34:19.605420 2057 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:34:19.612593 kubelet[2057]: I0913 01:34:19.612558 2057 kubelet.go:446] "Attempting to sync node with API server" Sep 13 01:34:19.612767 kubelet[2057]: I0913 01:34:19.612755 2057 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:34:19.612861 kubelet[2057]: I0913 01:34:19.612851 2057 kubelet.go:352] "Adding apiserver pod source" Sep 13 01:34:19.612924 kubelet[2057]: I0913 01:34:19.612916 2057 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:34:19.617032 kubelet[2057]: I0913 01:34:19.617000 2057 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 01:34:19.617527 kubelet[2057]: I0913 01:34:19.617498 2057 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:34:19.617583 kubelet[2057]: W0913 01:34:19.617558 2057 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 01:34:19.618130 kubelet[2057]: I0913 01:34:19.618083 2057 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 01:34:19.618192 kubelet[2057]: I0913 01:34:19.618151 2057 server.go:1287] "Started kubelet" Sep 13 01:34:19.618334 kubelet[2057]: W0913 01:34:19.618286 2057 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8e33b0f951&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Sep 13 01:34:19.618375 kubelet[2057]: E0913 01:34:19.618340 2057 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8e33b0f951&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:19.627224 kubelet[2057]: W0913 01:34:19.627180 2057 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Sep 13 01:34:19.627424 kubelet[2057]: E0913 01:34:19.627395 2057 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:19.627645 kubelet[2057]: E0913 01:34:19.627535 2057 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-3510.3.8-n-8e33b0f951.1864b3a1398f02b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-8e33b0f951,UID:ci-3510.3.8-n-8e33b0f951,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-8e33b0f951,},FirstTimestamp:2025-09-13 01:34:19.618116276 +0000 UTC m=+0.752194303,LastTimestamp:2025-09-13 01:34:19.618116276 +0000 UTC m=+0.752194303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-8e33b0f951,}" Sep 13 01:34:19.628549 kubelet[2057]: I0913 01:34:19.628517 2057 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:34:19.630127 kubelet[2057]: I0913 01:34:19.630074 2057 server.go:479] "Adding debug handlers to kubelet server" Sep 13 01:34:19.637951 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 01:34:19.638172 kubelet[2057]: I0913 01:34:19.638141 2057 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:34:19.638613 kubelet[2057]: I0913 01:34:19.638547 2057 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:34:19.638926 kubelet[2057]: I0913 01:34:19.638907 2057 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:34:19.641091 kubelet[2057]: I0913 01:34:19.641053 2057 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:34:19.642333 kubelet[2057]: I0913 01:34:19.642302 2057 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 01:34:19.642611 kubelet[2057]: E0913 01:34:19.642576 2057 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" Sep 13 01:34:19.644066 kubelet[2057]: I0913 01:34:19.644016 2057 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 01:34:19.644172 kubelet[2057]: I0913 01:34:19.644116 2057 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:34:19.645390 kubelet[2057]: W0913 01:34:19.645327 2057 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Sep 13 01:34:19.645478 kubelet[2057]: E0913 01:34:19.645391 2057 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:19.645478 kubelet[2057]: 
E0913 01:34:19.645456 2057 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8e33b0f951?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="200ms" Sep 13 01:34:19.645728 kubelet[2057]: I0913 01:34:19.645697 2057 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:34:19.645805 kubelet[2057]: I0913 01:34:19.645783 2057 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:34:19.648593 kubelet[2057]: I0913 01:34:19.648562 2057 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:34:19.660583 kubelet[2057]: E0913 01:34:19.660538 2057 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:34:19.698109 kubelet[2057]: I0913 01:34:19.698053 2057 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 01:34:19.698277 kubelet[2057]: I0913 01:34:19.698265 2057 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 01:34:19.698355 kubelet[2057]: I0913 01:34:19.698347 2057 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:34:19.704563 kubelet[2057]: I0913 01:34:19.704535 2057 policy_none.go:49] "None policy: Start" Sep 13 01:34:19.704728 kubelet[2057]: I0913 01:34:19.704715 2057 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 01:34:19.704794 kubelet[2057]: I0913 01:34:19.704785 2057 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:34:19.712701 kubelet[2057]: I0913 01:34:19.712595 2057 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 01:34:19.713591 kubelet[2057]: I0913 01:34:19.713557 2057 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 01:34:19.713591 kubelet[2057]: I0913 01:34:19.713587 2057 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 01:34:19.713705 kubelet[2057]: I0913 01:34:19.713606 2057 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 01:34:19.713705 kubelet[2057]: I0913 01:34:19.713613 2057 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 01:34:19.713705 kubelet[2057]: E0913 01:34:19.713660 2057 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:34:19.715348 systemd[1]: Created slice kubepods.slice. Sep 13 01:34:19.721146 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 01:34:19.724243 systemd[1]: Created slice kubepods-besteffort.slice. 
Sep 13 01:34:19.726528 kubelet[2057]: W0913 01:34:19.726493 2057 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused Sep 13 01:34:19.726711 kubelet[2057]: E0913 01:34:19.726692 2057 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:34:19.730017 kubelet[2057]: I0913 01:34:19.729993 2057 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:34:19.731939 kubelet[2057]: I0913 01:34:19.731918 2057 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:34:19.732647 kubelet[2057]: I0913 01:34:19.732605 2057 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:34:19.733176 kubelet[2057]: I0913 01:34:19.732964 2057 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:34:19.733495 kubelet[2057]: E0913 01:34:19.733466 2057 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 01:34:19.733578 kubelet[2057]: E0913 01:34:19.733521 2057 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-8e33b0f951\" not found" Sep 13 01:34:19.822610 systemd[1]: Created slice kubepods-burstable-pod0bc5f33108857c56f1d5938fd980b9b9.slice. 
Sep 13 01:34:19.832415 kubelet[2057]: E0913 01:34:19.832379 2057 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" node="ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.835369 systemd[1]: Created slice kubepods-burstable-pode3fb06e579d51a69f9f3b1f965bf9991.slice. Sep 13 01:34:19.838367 kubelet[2057]: I0913 01:34:19.837938 2057 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.838725 kubelet[2057]: E0913 01:34:19.838537 2057 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.838957 kubelet[2057]: E0913 01:34:19.838932 2057 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" node="ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.845975 kubelet[2057]: E0913 01:34:19.845942 2057 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8e33b0f951?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="400ms" Sep 13 01:34:19.848394 systemd[1]: Created slice kubepods-burstable-pod040834786e9210ace075bf90656b58cb.slice. 
Sep 13 01:34:19.850802 kubelet[2057]: E0913 01:34:19.850770 2057 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" node="ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.944545 kubelet[2057]: I0913 01:34:19.944506 2057 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3fb06e579d51a69f9f3b1f965bf9991-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" (UID: \"e3fb06e579d51a69f9f3b1f965bf9991\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.944545 kubelet[2057]: I0913 01:34:19.944553 2057 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.944917 kubelet[2057]: I0913 01:34:19.944578 2057 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.944917 kubelet[2057]: I0913 01:34:19.944593 2057 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3fb06e579d51a69f9f3b1f965bf9991-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" (UID: \"e3fb06e579d51a69f9f3b1f965bf9991\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.944917 kubelet[2057]: I0913 01:34:19.944620 2057 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3fb06e579d51a69f9f3b1f965bf9991-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" (UID: \"e3fb06e579d51a69f9f3b1f965bf9991\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.944917 kubelet[2057]: I0913 01:34:19.944638 2057 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.944917 kubelet[2057]: I0913 01:34:19.944652 2057 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.945036 kubelet[2057]: I0913 01:34:19.944668 2057 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:19.945036 kubelet[2057]: I0913 01:34:19.944697 2057 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/040834786e9210ace075bf90656b58cb-kubeconfig\") pod 
\"kube-scheduler-ci-3510.3.8-n-8e33b0f951\" (UID: \"040834786e9210ace075bf90656b58cb\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:20.041128 kubelet[2057]: I0913 01:34:20.041083 2057 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:20.041494 kubelet[2057]: E0913 01:34:20.041466 2057 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:20.134457 env[1477]: time="2025-09-13T01:34:20.134354350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-8e33b0f951,Uid:0bc5f33108857c56f1d5938fd980b9b9,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:20.140382 env[1477]: time="2025-09-13T01:34:20.140344090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-8e33b0f951,Uid:e3fb06e579d51a69f9f3b1f965bf9991,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:20.152125 env[1477]: time="2025-09-13T01:34:20.152059409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-8e33b0f951,Uid:040834786e9210ace075bf90656b58cb,Namespace:kube-system,Attempt:0,}"
Sep 13 01:34:20.247829 kubelet[2057]: E0913 01:34:20.247771 2057 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8e33b0f951?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="800ms"
Sep 13 01:34:20.444508 kubelet[2057]: I0913 01:34:20.444008 2057 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:20.444508 kubelet[2057]: E0913 01:34:20.444350 2057 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:20.498200 kubelet[2057]: W0913 01:34:20.498162 2057 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused
Sep 13 01:34:20.498325 kubelet[2057]: E0913 01:34:20.498211 2057 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:20.521838 kubelet[2057]: W0913 01:34:20.521780 2057 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused
Sep 13 01:34:20.521917 kubelet[2057]: E0913 01:34:20.521848 2057 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:20.574924 kubelet[2057]: W0913 01:34:20.574863 2057 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8e33b0f951&limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused
Sep 13 01:34:20.575139 kubelet[2057]: E0913 01:34:20.574931 2057 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-8e33b0f951&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:20.657893 kubelet[2057]: W0913 01:34:20.657817 2057 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.18:6443: connect: connection refused
Sep 13 01:34:20.657893 kubelet[2057]: E0913 01:34:20.657862 2057 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:34:20.771705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount395033773.mount: Deactivated successfully.
Sep 13 01:34:20.799322 env[1477]: time="2025-09-13T01:34:20.799275275Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.802352 env[1477]: time="2025-09-13T01:34:20.802314565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.813881 env[1477]: time="2025-09-13T01:34:20.813830524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.817389 env[1477]: time="2025-09-13T01:34:20.817348616Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.820992 env[1477]: time="2025-09-13T01:34:20.820955108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.826750 env[1477]: time="2025-09-13T01:34:20.826699647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.835249 env[1477]: time="2025-09-13T01:34:20.835202596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.838786 env[1477]: time="2025-09-13T01:34:20.838738048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.842142 env[1477]: time="2025-09-13T01:34:20.842093099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.848123 env[1477]: time="2025-09-13T01:34:20.848074440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.855465 env[1477]: time="2025-09-13T01:34:20.855420824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.862687 env[1477]: time="2025-09-13T01:34:20.862644009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 01:34:20.918388 env[1477]: time="2025-09-13T01:34:20.918206636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:34:20.918388 env[1477]: time="2025-09-13T01:34:20.918247317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:34:20.918388 env[1477]: time="2025-09-13T01:34:20.918257637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:34:20.918617 env[1477]: time="2025-09-13T01:34:20.918413357Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac9aa85a2b04dec7635171839778846654f65721bb11b468c185605b7e6f4466 pid=2095 runtime=io.containerd.runc.v2
Sep 13 01:34:20.934319 env[1477]: time="2025-09-13T01:34:20.934259651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:34:20.934517 env[1477]: time="2025-09-13T01:34:20.934493891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:34:20.934598 env[1477]: time="2025-09-13T01:34:20.934577972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:34:20.935423 env[1477]: time="2025-09-13T01:34:20.935374054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcb9a929c86d75b809c56071e428ec634e23e2b9be4ae4af6fe103aec99d5ba9 pid=2118 runtime=io.containerd.runc.v2
Sep 13 01:34:20.938654 systemd[1]: Started cri-containerd-ac9aa85a2b04dec7635171839778846654f65721bb11b468c185605b7e6f4466.scope.
Sep 13 01:34:20.959735 env[1477]: time="2025-09-13T01:34:20.959530176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:34:20.959735 env[1477]: time="2025-09-13T01:34:20.959573536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:34:20.959735 env[1477]: time="2025-09-13T01:34:20.959585056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:34:20.960348 env[1477]: time="2025-09-13T01:34:20.960246018Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89daee9065a2ff16a3dc28fac0ac0490c53da4c51c0ed8a1178598748a03e83c pid=2141 runtime=io.containerd.runc.v2
Sep 13 01:34:20.972975 systemd[1]: Started cri-containerd-dcb9a929c86d75b809c56071e428ec634e23e2b9be4ae4af6fe103aec99d5ba9.scope.
Sep 13 01:34:20.995572 systemd[1]: Started cri-containerd-89daee9065a2ff16a3dc28fac0ac0490c53da4c51c0ed8a1178598748a03e83c.scope.
Sep 13 01:34:21.000571 env[1477]: time="2025-09-13T01:34:21.000531354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-8e33b0f951,Uid:e3fb06e579d51a69f9f3b1f965bf9991,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac9aa85a2b04dec7635171839778846654f65721bb11b468c185605b7e6f4466\""
Sep 13 01:34:21.009087 env[1477]: time="2025-09-13T01:34:21.009045662Z" level=info msg="CreateContainer within sandbox \"ac9aa85a2b04dec7635171839778846654f65721bb11b468c185605b7e6f4466\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 01:34:21.030741 env[1477]: time="2025-09-13T01:34:21.028745764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-8e33b0f951,Uid:040834786e9210ace075bf90656b58cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcb9a929c86d75b809c56071e428ec634e23e2b9be4ae4af6fe103aec99d5ba9\""
Sep 13 01:34:21.031671 env[1477]: time="2025-09-13T01:34:21.031635413Z" level=info msg="CreateContainer within sandbox \"dcb9a929c86d75b809c56071e428ec634e23e2b9be4ae4af6fe103aec99d5ba9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 01:34:21.045997 env[1477]: time="2025-09-13T01:34:21.045935698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-8e33b0f951,Uid:0bc5f33108857c56f1d5938fd980b9b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"89daee9065a2ff16a3dc28fac0ac0490c53da4c51c0ed8a1178598748a03e83c\""
Sep 13 01:34:21.048618 kubelet[2057]: E0913 01:34:21.048579 2057 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-8e33b0f951?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="1.6s"
Sep 13 01:34:21.048913 env[1477]: time="2025-09-13T01:34:21.048666267Z" level=info msg="CreateContainer within sandbox \"89daee9065a2ff16a3dc28fac0ac0490c53da4c51c0ed8a1178598748a03e83c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 01:34:21.060699 env[1477]: time="2025-09-13T01:34:21.060643305Z" level=info msg="CreateContainer within sandbox \"ac9aa85a2b04dec7635171839778846654f65721bb11b468c185605b7e6f4466\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ce7bd4a899501a5f2726b4e2c79f02622b42c309a94ec55af684ecdf811bdf0f\""
Sep 13 01:34:21.061622 env[1477]: time="2025-09-13T01:34:21.061592348Z" level=info msg="StartContainer for \"ce7bd4a899501a5f2726b4e2c79f02622b42c309a94ec55af684ecdf811bdf0f\""
Sep 13 01:34:21.078951 systemd[1]: Started cri-containerd-ce7bd4a899501a5f2726b4e2c79f02622b42c309a94ec55af684ecdf811bdf0f.scope.
Sep 13 01:34:21.084127 env[1477]: time="2025-09-13T01:34:21.082832615Z" level=info msg="CreateContainer within sandbox \"dcb9a929c86d75b809c56071e428ec634e23e2b9be4ae4af6fe103aec99d5ba9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"011db0210658874c8b5e04b2f9024c1d9bd23bed5130cf0962e589b3e148974d\""
Sep 13 01:34:21.085910 env[1477]: time="2025-09-13T01:34:21.085861505Z" level=info msg="StartContainer for \"011db0210658874c8b5e04b2f9024c1d9bd23bed5130cf0962e589b3e148974d\""
Sep 13 01:34:21.121264 systemd[1]: Started cri-containerd-011db0210658874c8b5e04b2f9024c1d9bd23bed5130cf0962e589b3e148974d.scope.
Sep 13 01:34:21.124753 env[1477]: time="2025-09-13T01:34:21.124700748Z" level=info msg="CreateContainer within sandbox \"89daee9065a2ff16a3dc28fac0ac0490c53da4c51c0ed8a1178598748a03e83c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"690259a37f252cff6a32b6ef6e65b4ca76318e6aa3fdaceec80bc06f59c54ef1\""
Sep 13 01:34:21.125300 env[1477]: time="2025-09-13T01:34:21.125274190Z" level=info msg="StartContainer for \"690259a37f252cff6a32b6ef6e65b4ca76318e6aa3fdaceec80bc06f59c54ef1\""
Sep 13 01:34:21.137667 env[1477]: time="2025-09-13T01:34:21.137621389Z" level=info msg="StartContainer for \"ce7bd4a899501a5f2726b4e2c79f02622b42c309a94ec55af684ecdf811bdf0f\" returns successfully"
Sep 13 01:34:21.153147 systemd[1]: Started cri-containerd-690259a37f252cff6a32b6ef6e65b4ca76318e6aa3fdaceec80bc06f59c54ef1.scope.
Sep 13 01:34:21.178708 env[1477]: time="2025-09-13T01:34:21.178659599Z" level=info msg="StartContainer for \"011db0210658874c8b5e04b2f9024c1d9bd23bed5130cf0962e589b3e148974d\" returns successfully"
Sep 13 01:34:21.213941 env[1477]: time="2025-09-13T01:34:21.213893350Z" level=info msg="StartContainer for \"690259a37f252cff6a32b6ef6e65b4ca76318e6aa3fdaceec80bc06f59c54ef1\" returns successfully"
Sep 13 01:34:21.246530 kubelet[2057]: I0913 01:34:21.246204 2057 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:21.724560 kubelet[2057]: E0913 01:34:21.724525 2057 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:21.729566 kubelet[2057]: E0913 01:34:21.729526 2057 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:21.732151 kubelet[2057]: E0913 01:34:21.732127 2057 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:22.734667 kubelet[2057]: E0913 01:34:22.734620 2057 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:22.735335 kubelet[2057]: E0913 01:34:22.735249 2057 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-8e33b0f951\" not found" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.466542 kubelet[2057]: I0913 01:34:23.466509 2057 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.543812 kubelet[2057]: I0913 01:34:23.543780 2057 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.562354 kubelet[2057]: E0913 01:34:23.562319 2057 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.562542 kubelet[2057]: I0913 01:34:23.562528 2057 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.582587 kubelet[2057]: E0913 01:34:23.582554 2057 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Sep 13 01:34:23.582879 kubelet[2057]: E0913 01:34:23.582853 2057 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.582938 kubelet[2057]: I0913 01:34:23.582882 2057 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.588681 kubelet[2057]: E0913 01:34:23.588649 2057 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-8e33b0f951\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.630513 kubelet[2057]: I0913 01:34:23.630482 2057 apiserver.go:52] "Watching apiserver"
Sep 13 01:34:23.644146 kubelet[2057]: I0913 01:34:23.644116 2057 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 01:34:23.734628 kubelet[2057]: I0913 01:34:23.734532 2057 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:23.737029 kubelet[2057]: E0913 01:34:23.736981 2057 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:25.814043 systemd[1]: Reloading.
Sep 13 01:34:25.908557 /usr/lib/systemd/system-generators/torcx-generator[2338]: time="2025-09-13T01:34:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 01:34:25.909043 /usr/lib/systemd/system-generators/torcx-generator[2338]: time="2025-09-13T01:34:25Z" level=info msg="torcx already run"
Sep 13 01:34:26.005788 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 01:34:26.005809 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 01:34:26.021795 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:34:26.148607 systemd[1]: Stopping kubelet.service...
Sep 13 01:34:26.166515 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 01:34:26.166730 systemd[1]: Stopped kubelet.service.
Sep 13 01:34:26.166788 systemd[1]: kubelet.service: Consumed 1.091s CPU time.
Sep 13 01:34:26.168762 systemd[1]: Starting kubelet.service...
Sep 13 01:34:26.383834 systemd[1]: Started kubelet.service.
Sep 13 01:34:26.458355 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:34:26.458355 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 01:34:26.458355 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:34:26.458755 kubelet[2401]: I0913 01:34:26.458411 2401 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 01:34:26.465885 kubelet[2401]: I0913 01:34:26.465838 2401 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 13 01:34:26.466061 kubelet[2401]: I0913 01:34:26.466050 2401 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 01:34:26.466799 kubelet[2401]: I0913 01:34:26.466772 2401 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 13 01:34:26.468057 kubelet[2401]: I0913 01:34:26.468035 2401 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 01:34:26.470372 kubelet[2401]: I0913 01:34:26.470349 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 01:34:26.473773 kubelet[2401]: E0913 01:34:26.473728 2401 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 01:34:26.473960 kubelet[2401]: I0913 01:34:26.473947 2401 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 01:34:26.476991 kubelet[2401]: I0913 01:34:26.476966 2401 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 01:34:26.477440 kubelet[2401]: I0913 01:34:26.477409 2401 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 01:34:26.477709 kubelet[2401]: I0913 01:34:26.477519 2401 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-8e33b0f951","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 01:34:26.477850 kubelet[2401]: I0913 01:34:26.477838 2401 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 01:34:26.477909 kubelet[2401]: I0913 01:34:26.477901 2401 container_manager_linux.go:304] "Creating device plugin manager"
Sep 13 01:34:26.478003 kubelet[2401]: I0913 01:34:26.477993 2401 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:34:26.478195 kubelet[2401]: I0913 01:34:26.478184 2401 kubelet.go:446] "Attempting to sync node with API server"
Sep 13 01:34:26.478273 kubelet[2401]: I0913 01:34:26.478263 2401 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 01:34:26.478339 kubelet[2401]: I0913 01:34:26.478330 2401 kubelet.go:352] "Adding apiserver pod source"
Sep 13 01:34:26.478400 kubelet[2401]: I0913 01:34:26.478391 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 01:34:26.484135 kubelet[2401]: I0913 01:34:26.484110 2401 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 13 01:34:26.485056 kubelet[2401]: I0913 01:34:26.485028 2401 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 01:34:26.485892 kubelet[2401]: I0913 01:34:26.485856 2401 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 01:34:26.485980 kubelet[2401]: I0913 01:34:26.485900 2401 server.go:1287] "Started kubelet"
Sep 13 01:34:26.487335 kubelet[2401]: I0913 01:34:26.487299 2401 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 01:34:26.488831 kubelet[2401]: I0913 01:34:26.488806 2401 server.go:479] "Adding debug handlers to kubelet server"
Sep 13 01:34:26.489188 kubelet[2401]: I0913 01:34:26.489132 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 01:34:26.489430 kubelet[2401]: I0913 01:34:26.489403 2401 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 01:34:26.492701 kubelet[2401]: I0913 01:34:26.492675 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 01:34:26.500115 kubelet[2401]: I0913 01:34:26.500067 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 01:34:26.501450 kubelet[2401]: I0913 01:34:26.501428 2401 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 01:34:26.501803 kubelet[2401]: E0913 01:34:26.501783 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-8e33b0f951\" not found"
Sep 13 01:34:26.502494 kubelet[2401]: I0913 01:34:26.502476 2401 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 01:34:26.502691 kubelet[2401]: I0913 01:34:26.502681 2401 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 01:34:26.515347 kubelet[2401]: I0913 01:34:26.515305 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 01:34:26.516398 kubelet[2401]: I0913 01:34:26.516367 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 01:34:26.516535 kubelet[2401]: I0913 01:34:26.516520 2401 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 01:34:26.516624 kubelet[2401]: I0913 01:34:26.516613 2401 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 01:34:26.516684 kubelet[2401]: I0913 01:34:26.516672 2401 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 01:34:26.516790 kubelet[2401]: E0913 01:34:26.516770 2401 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 01:34:26.525817 kubelet[2401]: I0913 01:34:26.525788 2401 factory.go:221] Registration of the systemd container factory successfully
Sep 13 01:34:26.526632 kubelet[2401]: I0913 01:34:26.526605 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 01:34:26.536585 kubelet[2401]: E0913 01:34:26.536537 2401 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 01:34:26.538304 kubelet[2401]: I0913 01:34:26.538269 2401 factory.go:221] Registration of the containerd container factory successfully
Sep 13 01:34:26.577080 kubelet[2401]: I0913 01:34:26.577051 2401 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 01:34:26.577080 kubelet[2401]: I0913 01:34:26.577072 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 01:34:26.577276 kubelet[2401]: I0913 01:34:26.577094 2401 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:34:26.577401 kubelet[2401]: I0913 01:34:26.577378 2401 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 01:34:26.577431 kubelet[2401]: I0913 01:34:26.577397 2401 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 01:34:26.577431 kubelet[2401]: I0913 01:34:26.577415 2401 policy_none.go:49] "None policy: Start"
Sep 13 01:34:26.577431 kubelet[2401]: I0913 01:34:26.577424 2401 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 01:34:26.577502 kubelet[2401]: I0913 01:34:26.577433 2401 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 01:34:26.577598 kubelet[2401]: I0913 01:34:26.577582 2401 state_mem.go:75] "Updated machine memory state"
Sep 13 01:34:26.581318 kubelet[2401]: I0913 01:34:26.581292 2401 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 01:34:26.581744 kubelet[2401]: I0913 01:34:26.581730 2401 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 01:34:26.581921 kubelet[2401]: I0913 01:34:26.581884 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 01:34:26.582770 kubelet[2401]: I0913 01:34:26.582587 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 01:34:26.585557 kubelet[2401]: E0913 01:34:26.585536 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 01:34:26.618014 kubelet[2401]: I0913 01:34:26.617980 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.618618 kubelet[2401]: I0913 01:34:26.618597 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.619166 kubelet[2401]: I0913 01:34:26.619149 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.628309 kubelet[2401]: W0913 01:34:26.628246 2401 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 01:34:26.632187 kubelet[2401]: W0913 01:34:26.632152 2401 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 01:34:26.632364 kubelet[2401]: W0913 01:34:26.632152 2401 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 13 01:34:26.684833 kubelet[2401]: I0913 01:34:26.684798 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.697243 kubelet[2401]: I0913 01:34:26.697213 2401 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.697524 kubelet[2401]: I0913 01:34:26.697512 2401 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.804392 kubelet[2401]: I0913 01:34:26.804291 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.804575 kubelet[2401]: I0913 01:34:26.804556 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.804686 kubelet[2401]: I0913 01:34:26.804670 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3fb06e579d51a69f9f3b1f965bf9991-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" (UID: \"e3fb06e579d51a69f9f3b1f965bf9991\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.804787 kubelet[2401]: I0913 01:34:26.804772 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3fb06e579d51a69f9f3b1f965bf9991-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" (UID: \"e3fb06e579d51a69f9f3b1f965bf9991\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.804882 kubelet[2401]: I0913 01:34:26.804869 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.805014 kubelet[2401]: I0913 01:34:26.804980 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.805063 kubelet[2401]: I0913 01:34:26.805022 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3fb06e579d51a69f9f3b1f965bf9991-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" (UID: \"e3fb06e579d51a69f9f3b1f965bf9991\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.805063 kubelet[2401]: I0913 01:34:26.805044 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0bc5f33108857c56f1d5938fd980b9b9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-8e33b0f951\" (UID: \"0bc5f33108857c56f1d5938fd980b9b9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.805148 kubelet[2401]: I0913 01:34:26.805065 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/040834786e9210ace075bf90656b58cb-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-8e33b0f951\" (UID: \"040834786e9210ace075bf90656b58cb\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-8e33b0f951"
Sep 13 01:34:26.887740 sudo[2432]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 13 01:34:26.889288 sudo[2432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Sep 13 01:34:27.377863 sudo[2432]: pam_unix(sudo:session): session closed for user root Sep 13 01:34:27.483587 kubelet[2401]: I0913 01:34:27.483541 2401 apiserver.go:52] "Watching apiserver" Sep 13 01:34:27.503713 kubelet[2401]: I0913 01:34:27.503676 2401 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 01:34:27.560625 kubelet[2401]: I0913 01:34:27.560594 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:27.571551 kubelet[2401]: W0913 01:34:27.571517 2401 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:34:27.571724 kubelet[2401]: E0913 01:34:27.571580 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-8e33b0f951\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951" Sep 13 01:34:27.584596 kubelet[2401]: I0913 01:34:27.584533 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-8e33b0f951" podStartSLOduration=1.584490393 podStartE2EDuration="1.584490393s" podCreationTimestamp="2025-09-13 01:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:34:27.583725032 +0000 UTC m=+1.191119449" watchObservedRunningTime="2025-09-13 01:34:27.584490393 +0000 UTC m=+1.191884810" Sep 13 01:34:27.609332 kubelet[2401]: I0913 01:34:27.609272 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-8e33b0f951" podStartSLOduration=1.609257487 podStartE2EDuration="1.609257487s" podCreationTimestamp="2025-09-13 01:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-13 01:34:27.59683138 +0000 UTC m=+1.204225797" watchObservedRunningTime="2025-09-13 01:34:27.609257487 +0000 UTC m=+1.216651904" Sep 13 01:34:27.628612 kubelet[2401]: I0913 01:34:27.628475 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-8e33b0f951" podStartSLOduration=1.628455888 podStartE2EDuration="1.628455888s" podCreationTimestamp="2025-09-13 01:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:34:27.609813608 +0000 UTC m=+1.217208025" watchObservedRunningTime="2025-09-13 01:34:27.628455888 +0000 UTC m=+1.235850305" Sep 13 01:34:29.943689 sudo[1759]: pam_unix(sudo:session): session closed for user root Sep 13 01:34:30.021871 sshd[1756]: pam_unix(sshd:session): session closed for user core Sep 13 01:34:30.024979 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Sep 13 01:34:30.025162 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 01:34:30.025339 systemd[1]: session-7.scope: Consumed 7.294s CPU time. Sep 13 01:34:30.026256 systemd-logind[1462]: Removed session 7. Sep 13 01:34:30.026496 systemd[1]: sshd@4-10.200.20.18:22-10.200.16.10:33110.service: Deactivated successfully. Sep 13 01:34:31.442563 kubelet[2401]: I0913 01:34:31.442527 2401 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 01:34:31.442920 env[1477]: time="2025-09-13T01:34:31.442837698Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 01:34:31.443145 kubelet[2401]: I0913 01:34:31.442999 2401 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 01:34:32.381078 systemd[1]: Created slice kubepods-besteffort-pod008e7945_3a7f_44ae_a922_eae810b09261.slice. 
Sep 13 01:34:32.412589 systemd[1]: Created slice kubepods-burstable-pod52a91b44_6450_4141_9606_8bc18f1baad6.slice. Sep 13 01:34:32.439724 kubelet[2401]: I0913 01:34:32.439678 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/008e7945-3a7f-44ae-a922-eae810b09261-lib-modules\") pod \"kube-proxy-4j2sv\" (UID: \"008e7945-3a7f-44ae-a922-eae810b09261\") " pod="kube-system/kube-proxy-4j2sv" Sep 13 01:34:32.439724 kubelet[2401]: I0913 01:34:32.439717 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52a91b44-6450-4141-9606-8bc18f1baad6-hubble-tls\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.439916 kubelet[2401]: I0913 01:34:32.439749 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/008e7945-3a7f-44ae-a922-eae810b09261-kube-proxy\") pod \"kube-proxy-4j2sv\" (UID: \"008e7945-3a7f-44ae-a922-eae810b09261\") " pod="kube-system/kube-proxy-4j2sv" Sep 13 01:34:32.439916 kubelet[2401]: I0913 01:34:32.439768 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cni-path\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.439916 kubelet[2401]: I0913 01:34:32.439786 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-etc-cni-netd\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.439916 
kubelet[2401]: I0913 01:34:32.439803 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-run\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.439916 kubelet[2401]: I0913 01:34:32.439828 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-cgroup\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.439916 kubelet[2401]: I0913 01:34:32.439844 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-host-proc-sys-net\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.440053 kubelet[2401]: I0913 01:34:32.439859 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-host-proc-sys-kernel\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.440053 kubelet[2401]: I0913 01:34:32.439876 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5l65\" (UniqueName: \"kubernetes.io/projected/52a91b44-6450-4141-9606-8bc18f1baad6-kube-api-access-v5l65\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.440053 kubelet[2401]: I0913 01:34:32.439902 2401 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-bpf-maps\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.440053 kubelet[2401]: I0913 01:34:32.439918 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-hostproc\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.440053 kubelet[2401]: I0913 01:34:32.439933 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-lib-modules\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.440053 kubelet[2401]: I0913 01:34:32.439948 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52a91b44-6450-4141-9606-8bc18f1baad6-clustermesh-secrets\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.440236 kubelet[2401]: I0913 01:34:32.439964 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r2sf\" (UniqueName: \"kubernetes.io/projected/008e7945-3a7f-44ae-a922-eae810b09261-kube-api-access-7r2sf\") pod \"kube-proxy-4j2sv\" (UID: \"008e7945-3a7f-44ae-a922-eae810b09261\") " pod="kube-system/kube-proxy-4j2sv" Sep 13 01:34:32.440236 kubelet[2401]: I0913 01:34:32.439991 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-config-path\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.440236 kubelet[2401]: I0913 01:34:32.440008 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/008e7945-3a7f-44ae-a922-eae810b09261-xtables-lock\") pod \"kube-proxy-4j2sv\" (UID: \"008e7945-3a7f-44ae-a922-eae810b09261\") " pod="kube-system/kube-proxy-4j2sv" Sep 13 01:34:32.440236 kubelet[2401]: I0913 01:34:32.440025 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-xtables-lock\") pod \"cilium-swhrp\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " pod="kube-system/cilium-swhrp" Sep 13 01:34:32.520860 systemd[1]: Created slice kubepods-besteffort-pod96864318_acd9_4ff0_955f_ec3fddddff45.slice. Sep 13 01:34:32.540385 kubelet[2401]: I0913 01:34:32.540343 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96864318-acd9-4ff0-955f-ec3fddddff45-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cs8dn\" (UID: \"96864318-acd9-4ff0-955f-ec3fddddff45\") " pod="kube-system/cilium-operator-6c4d7847fc-cs8dn" Sep 13 01:34:32.545280 kubelet[2401]: I0913 01:34:32.545237 2401 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 01:34:32.545597 kubelet[2401]: I0913 01:34:32.545579 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcgm\" (UniqueName: \"kubernetes.io/projected/96864318-acd9-4ff0-955f-ec3fddddff45-kube-api-access-bkcgm\") pod \"cilium-operator-6c4d7847fc-cs8dn\" (UID: \"96864318-acd9-4ff0-955f-ec3fddddff45\") " pod="kube-system/cilium-operator-6c4d7847fc-cs8dn" Sep 13 01:34:32.690451 env[1477]: time="2025-09-13T01:34:32.690334737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4j2sv,Uid:008e7945-3a7f-44ae-a922-eae810b09261,Namespace:kube-system,Attempt:0,}" Sep 13 01:34:32.716291 env[1477]: time="2025-09-13T01:34:32.716233898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swhrp,Uid:52a91b44-6450-4141-9606-8bc18f1baad6,Namespace:kube-system,Attempt:0,}" Sep 13 01:34:32.727557 env[1477]: time="2025-09-13T01:34:32.727472155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:32.727557 env[1477]: time="2025-09-13T01:34:32.727517435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:32.727557 env[1477]: time="2025-09-13T01:34:32.727528715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:32.727933 env[1477]: time="2025-09-13T01:34:32.727890636Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/733cc7e58bf6c5d2747856de2e73a2ea56fdf5d0744ce390b56658edef7d3bba pid=2483 runtime=io.containerd.runc.v2 Sep 13 01:34:32.739022 systemd[1]: Started cri-containerd-733cc7e58bf6c5d2747856de2e73a2ea56fdf5d0744ce390b56658edef7d3bba.scope. Sep 13 01:34:32.759299 env[1477]: time="2025-09-13T01:34:32.758815964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:32.759515 env[1477]: time="2025-09-13T01:34:32.759264565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:32.759515 env[1477]: time="2025-09-13T01:34:32.759292725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:32.759600 env[1477]: time="2025-09-13T01:34:32.759497165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687 pid=2517 runtime=io.containerd.runc.v2 Sep 13 01:34:32.770959 env[1477]: time="2025-09-13T01:34:32.770908263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4j2sv,Uid:008e7945-3a7f-44ae-a922-eae810b09261,Namespace:kube-system,Attempt:0,} returns sandbox id \"733cc7e58bf6c5d2747856de2e73a2ea56fdf5d0744ce390b56658edef7d3bba\"" Sep 13 01:34:32.778830 systemd[1]: Started cri-containerd-0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687.scope. 
Sep 13 01:34:32.784137 env[1477]: time="2025-09-13T01:34:32.783144767Z" level=info msg="CreateContainer within sandbox \"733cc7e58bf6c5d2747856de2e73a2ea56fdf5d0744ce390b56658edef7d3bba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 01:34:32.806039 env[1477]: time="2025-09-13T01:34:32.805958893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swhrp,Uid:52a91b44-6450-4141-9606-8bc18f1baad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\"" Sep 13 01:34:32.810735 env[1477]: time="2025-09-13T01:34:32.810658303Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 01:34:32.825790 env[1477]: time="2025-09-13T01:34:32.825738453Z" level=info msg="CreateContainer within sandbox \"733cc7e58bf6c5d2747856de2e73a2ea56fdf5d0744ce390b56658edef7d3bba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2635db4f8dc52ef83cc7d3501d59c577066b8de43213a36f044cf8ef02781ad8\"" Sep 13 01:34:32.826594 env[1477]: time="2025-09-13T01:34:32.826554375Z" level=info msg="StartContainer for \"2635db4f8dc52ef83cc7d3501d59c577066b8de43213a36f044cf8ef02781ad8\"" Sep 13 01:34:32.833374 env[1477]: time="2025-09-13T01:34:32.833331108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cs8dn,Uid:96864318-acd9-4ff0-955f-ec3fddddff45,Namespace:kube-system,Attempt:0,}" Sep 13 01:34:32.844801 systemd[1]: Started cri-containerd-2635db4f8dc52ef83cc7d3501d59c577066b8de43213a36f044cf8ef02781ad8.scope. Sep 13 01:34:32.874679 env[1477]: time="2025-09-13T01:34:32.874579392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:32.874679 env[1477]: time="2025-09-13T01:34:32.874640152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:32.874897 env[1477]: time="2025-09-13T01:34:32.874651512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:32.876210 env[1477]: time="2025-09-13T01:34:32.875091993Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c pid=2596 runtime=io.containerd.runc.v2 Sep 13 01:34:32.884033 env[1477]: time="2025-09-13T01:34:32.883977651Z" level=info msg="StartContainer for \"2635db4f8dc52ef83cc7d3501d59c577066b8de43213a36f044cf8ef02781ad8\" returns successfully" Sep 13 01:34:32.894577 systemd[1]: Started cri-containerd-2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c.scope. Sep 13 01:34:32.936413 env[1477]: time="2025-09-13T01:34:32.936361676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cs8dn,Uid:96864318-acd9-4ff0-955f-ec3fddddff45,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c\"" Sep 13 01:34:33.590034 kubelet[2401]: I0913 01:34:33.589964 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4j2sv" podStartSLOduration=1.5899303 podStartE2EDuration="1.5899303s" podCreationTimestamp="2025-09-13 01:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:34:33.589749339 +0000 UTC m=+7.197143836" watchObservedRunningTime="2025-09-13 01:34:33.5899303 +0000 UTC m=+7.197324717" Sep 13 01:34:39.231897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457318427.mount: Deactivated successfully. 
Sep 13 01:34:41.388537 env[1477]: time="2025-09-13T01:34:41.388326220Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:41.399144 env[1477]: time="2025-09-13T01:34:41.399080696Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:41.403652 env[1477]: time="2025-09-13T01:34:41.403611391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:41.404369 env[1477]: time="2025-09-13T01:34:41.404337314Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 13 01:34:41.406772 env[1477]: time="2025-09-13T01:34:41.406369921Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 01:34:41.408272 env[1477]: time="2025-09-13T01:34:41.408237967Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:34:41.435490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664527635.mount: Deactivated successfully. Sep 13 01:34:41.440200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439226641.mount: Deactivated successfully. 
Sep 13 01:34:41.451807 env[1477]: time="2025-09-13T01:34:41.451752872Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\"" Sep 13 01:34:41.455851 env[1477]: time="2025-09-13T01:34:41.452424434Z" level=info msg="StartContainer for \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\"" Sep 13 01:34:41.500496 systemd[1]: Started cri-containerd-9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f.scope. Sep 13 01:34:41.532462 env[1477]: time="2025-09-13T01:34:41.532410381Z" level=info msg="StartContainer for \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\" returns successfully" Sep 13 01:34:41.540457 systemd[1]: cri-containerd-9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f.scope: Deactivated successfully. Sep 13 01:34:42.432052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f-rootfs.mount: Deactivated successfully. 
Sep 13 01:34:43.304913 env[1477]: time="2025-09-13T01:34:43.303559945Z" level=info msg="shim disconnected" id=9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f Sep 13 01:34:43.304913 env[1477]: time="2025-09-13T01:34:43.303611585Z" level=warning msg="cleaning up after shim disconnected" id=9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f namespace=k8s.io Sep 13 01:34:43.304913 env[1477]: time="2025-09-13T01:34:43.303624385Z" level=info msg="cleaning up dead shim" Sep 13 01:34:43.316062 env[1477]: time="2025-09-13T01:34:43.316014944Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2816 runtime=io.containerd.runc.v2\n" Sep 13 01:34:43.598976 env[1477]: time="2025-09-13T01:34:43.598517997Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:34:43.638740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717900112.mount: Deactivated successfully. Sep 13 01:34:43.671535 env[1477]: time="2025-09-13T01:34:43.671464627Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\"" Sep 13 01:34:43.672008 env[1477]: time="2025-09-13T01:34:43.671971909Z" level=info msg="StartContainer for \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\"" Sep 13 01:34:43.693911 systemd[1]: Started cri-containerd-7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95.scope. 
Sep 13 01:34:43.734037 env[1477]: time="2025-09-13T01:34:43.733986185Z" level=info msg="StartContainer for \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\" returns successfully" Sep 13 01:34:43.739930 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:34:43.740163 systemd[1]: Stopped systemd-sysctl.service. Sep 13 01:34:43.741057 systemd[1]: Stopping systemd-sysctl.service... Sep 13 01:34:43.744668 systemd[1]: Starting systemd-sysctl.service... Sep 13 01:34:43.745015 systemd[1]: cri-containerd-7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95.scope: Deactivated successfully. Sep 13 01:34:43.753991 systemd[1]: Finished systemd-sysctl.service. Sep 13 01:34:43.781732 env[1477]: time="2025-09-13T01:34:43.781684335Z" level=info msg="shim disconnected" id=7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95 Sep 13 01:34:43.782053 env[1477]: time="2025-09-13T01:34:43.782033256Z" level=warning msg="cleaning up after shim disconnected" id=7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95 namespace=k8s.io Sep 13 01:34:43.782180 env[1477]: time="2025-09-13T01:34:43.782162857Z" level=info msg="cleaning up dead shim" Sep 13 01:34:43.790369 env[1477]: time="2025-09-13T01:34:43.790321242Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:34:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2879 runtime=io.containerd.runc.v2\n" Sep 13 01:34:44.605502 env[1477]: time="2025-09-13T01:34:44.605457326Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 01:34:44.635719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95-rootfs.mount: Deactivated successfully. 
Sep 13 01:34:44.685046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239891988.mount: Deactivated successfully. Sep 13 01:34:44.697081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225476096.mount: Deactivated successfully. Sep 13 01:34:44.730067 env[1477]: time="2025-09-13T01:34:44.730010869Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\"" Sep 13 01:34:44.732065 env[1477]: time="2025-09-13T01:34:44.730975472Z" level=info msg="StartContainer for \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\"" Sep 13 01:34:44.754497 systemd[1]: Started cri-containerd-4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986.scope. Sep 13 01:34:44.794392 systemd[1]: cri-containerd-4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986.scope: Deactivated successfully. 
Sep 13 01:34:44.798805 env[1477]: time="2025-09-13T01:34:44.798754720Z" level=info msg="StartContainer for \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\" returns successfully" Sep 13 01:34:44.851899 env[1477]: time="2025-09-13T01:34:44.851839484Z" level=info msg="shim disconnected" id=4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986 Sep 13 01:34:44.852254 env[1477]: time="2025-09-13T01:34:44.852233725Z" level=warning msg="cleaning up after shim disconnected" id=4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986 namespace=k8s.io Sep 13 01:34:44.852349 env[1477]: time="2025-09-13T01:34:44.852334885Z" level=info msg="cleaning up dead shim" Sep 13 01:34:44.868989 env[1477]: time="2025-09-13T01:34:44.868879096Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:34:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2937 runtime=io.containerd.runc.v2\n" Sep 13 01:34:45.247927 env[1477]: time="2025-09-13T01:34:45.247865521Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:45.255533 env[1477]: time="2025-09-13T01:34:45.255497544Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:45.263539 env[1477]: time="2025-09-13T01:34:45.263497328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 01:34:45.264063 env[1477]: time="2025-09-13T01:34:45.264030489Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 13 01:34:45.268088 env[1477]: time="2025-09-13T01:34:45.268040261Z" level=info msg="CreateContainer within sandbox \"2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 01:34:45.300244 env[1477]: time="2025-09-13T01:34:45.300190918Z" level=info msg="CreateContainer within sandbox \"2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\"" Sep 13 01:34:45.301176 env[1477]: time="2025-09-13T01:34:45.301147680Z" level=info msg="StartContainer for \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\"" Sep 13 01:34:45.316522 systemd[1]: Started cri-containerd-1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49.scope. 
Sep 13 01:34:45.346440 env[1477]: time="2025-09-13T01:34:45.346386616Z" level=info msg="StartContainer for \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\" returns successfully" Sep 13 01:34:45.607762 env[1477]: time="2025-09-13T01:34:45.607212316Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 01:34:45.619313 kubelet[2401]: I0913 01:34:45.619246 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cs8dn" podStartSLOduration=1.291901339 podStartE2EDuration="13.619229952s" podCreationTimestamp="2025-09-13 01:34:32 +0000 UTC" firstStartedPulling="2025-09-13 01:34:32.937612839 +0000 UTC m=+6.545007216" lastFinishedPulling="2025-09-13 01:34:45.264941452 +0000 UTC m=+18.872335829" observedRunningTime="2025-09-13 01:34:45.618705711 +0000 UTC m=+19.226100128" watchObservedRunningTime="2025-09-13 01:34:45.619229952 +0000 UTC m=+19.226624329" Sep 13 01:34:45.644586 env[1477]: time="2025-09-13T01:34:45.644529548Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\"" Sep 13 01:34:45.645442 env[1477]: time="2025-09-13T01:34:45.645404831Z" level=info msg="StartContainer for \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\"" Sep 13 01:34:45.679623 systemd[1]: Started cri-containerd-078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f.scope. Sep 13 01:34:45.685188 systemd[1]: run-containerd-runc-k8s.io-078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f-runc.Jbm3XG.mount: Deactivated successfully. 
Sep 13 01:34:45.741090 env[1477]: time="2025-09-13T01:34:45.741029837Z" level=info msg="StartContainer for \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\" returns successfully" Sep 13 01:34:45.743736 systemd[1]: cri-containerd-078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f.scope: Deactivated successfully. Sep 13 01:34:45.991710 env[1477]: time="2025-09-13T01:34:45.991634787Z" level=info msg="shim disconnected" id=078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f Sep 13 01:34:45.991710 env[1477]: time="2025-09-13T01:34:45.991692987Z" level=warning msg="cleaning up after shim disconnected" id=078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f namespace=k8s.io Sep 13 01:34:45.991710 env[1477]: time="2025-09-13T01:34:45.991702667Z" level=info msg="cleaning up dead shim" Sep 13 01:34:45.998618 env[1477]: time="2025-09-13T01:34:45.998566048Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:34:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3029 runtime=io.containerd.runc.v2\n" Sep 13 01:34:46.617867 env[1477]: time="2025-09-13T01:34:46.617820692Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 01:34:46.635834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f-rootfs.mount: Deactivated successfully. Sep 13 01:34:46.664741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360057078.mount: Deactivated successfully. Sep 13 01:34:46.670747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405933809.mount: Deactivated successfully. 
Sep 13 01:34:46.682186 env[1477]: time="2025-09-13T01:34:46.682133799Z" level=info msg="CreateContainer within sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\"" Sep 13 01:34:46.683328 env[1477]: time="2025-09-13T01:34:46.683297083Z" level=info msg="StartContainer for \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\"" Sep 13 01:34:46.699702 systemd[1]: Started cri-containerd-02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482.scope. Sep 13 01:34:46.739865 env[1477]: time="2025-09-13T01:34:46.739802967Z" level=info msg="StartContainer for \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\" returns successfully" Sep 13 01:34:46.829126 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 01:34:46.917693 kubelet[2401]: I0913 01:34:46.916027 2401 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 01:34:46.959040 systemd[1]: Created slice kubepods-burstable-pod2c15a070_58f9_46a3_96c7_a379fe715288.slice. Sep 13 01:34:46.967607 systemd[1]: Created slice kubepods-burstable-podc7040f20_6afc_4a51_be45_8bbb36fa8f9b.slice. 
Sep 13 01:34:47.046660 kubelet[2401]: I0913 01:34:47.046623 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvt4k\" (UniqueName: \"kubernetes.io/projected/c7040f20-6afc-4a51-be45-8bbb36fa8f9b-kube-api-access-fvt4k\") pod \"coredns-668d6bf9bc-9jgkv\" (UID: \"c7040f20-6afc-4a51-be45-8bbb36fa8f9b\") " pod="kube-system/coredns-668d6bf9bc-9jgkv" Sep 13 01:34:47.046871 kubelet[2401]: I0913 01:34:47.046852 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c15a070-58f9-46a3-96c7-a379fe715288-config-volume\") pod \"coredns-668d6bf9bc-8c6gv\" (UID: \"2c15a070-58f9-46a3-96c7-a379fe715288\") " pod="kube-system/coredns-668d6bf9bc-8c6gv" Sep 13 01:34:47.046973 kubelet[2401]: I0913 01:34:47.046960 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7040f20-6afc-4a51-be45-8bbb36fa8f9b-config-volume\") pod \"coredns-668d6bf9bc-9jgkv\" (UID: \"c7040f20-6afc-4a51-be45-8bbb36fa8f9b\") " pod="kube-system/coredns-668d6bf9bc-9jgkv" Sep 13 01:34:47.047085 kubelet[2401]: I0913 01:34:47.047071 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5s8s\" (UniqueName: \"kubernetes.io/projected/2c15a070-58f9-46a3-96c7-a379fe715288-kube-api-access-w5s8s\") pod \"coredns-668d6bf9bc-8c6gv\" (UID: \"2c15a070-58f9-46a3-96c7-a379fe715288\") " pod="kube-system/coredns-668d6bf9bc-8c6gv" Sep 13 01:34:47.264084 env[1477]: time="2025-09-13T01:34:47.264033715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8c6gv,Uid:2c15a070-58f9-46a3-96c7-a379fe715288,Namespace:kube-system,Attempt:0,}" Sep 13 01:34:47.271577 env[1477]: time="2025-09-13T01:34:47.271302415Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-9jgkv,Uid:c7040f20-6afc-4a51-be45-8bbb36fa8f9b,Namespace:kube-system,Attempt:0,}" Sep 13 01:34:47.556125 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 13 01:34:47.632993 kubelet[2401]: I0913 01:34:47.632924 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-swhrp" podStartSLOduration=7.0350371 podStartE2EDuration="15.632905041s" podCreationTimestamp="2025-09-13 01:34:32 +0000 UTC" firstStartedPulling="2025-09-13 01:34:32.807818137 +0000 UTC m=+6.415212554" lastFinishedPulling="2025-09-13 01:34:41.405686078 +0000 UTC m=+15.013080495" observedRunningTime="2025-09-13 01:34:47.632012399 +0000 UTC m=+21.239406816" watchObservedRunningTime="2025-09-13 01:34:47.632905041 +0000 UTC m=+21.240299458" Sep 13 01:34:49.252855 systemd-networkd[1637]: cilium_host: Link UP Sep 13 01:34:49.252959 systemd-networkd[1637]: cilium_net: Link UP Sep 13 01:34:49.252962 systemd-networkd[1637]: cilium_net: Gained carrier Sep 13 01:34:49.253075 systemd-networkd[1637]: cilium_host: Gained carrier Sep 13 01:34:49.256147 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 01:34:49.256457 systemd-networkd[1637]: cilium_host: Gained IPv6LL Sep 13 01:34:49.338278 systemd-networkd[1637]: cilium_net: Gained IPv6LL Sep 13 01:34:49.443542 systemd-networkd[1637]: cilium_vxlan: Link UP Sep 13 01:34:49.443550 systemd-networkd[1637]: cilium_vxlan: Gained carrier Sep 13 01:34:49.735144 kernel: NET: Registered PF_ALG protocol family Sep 13 01:34:50.545448 systemd-networkd[1637]: lxc_health: Link UP Sep 13 01:34:50.567628 systemd-networkd[1637]: lxc_health: Gained carrier Sep 13 01:34:50.568392 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 01:34:50.853443 systemd-networkd[1637]: lxcc665bbbd0c35: Link UP Sep 13 01:34:50.867138 kernel: eth0: renamed from tmpca793 Sep 13 01:34:50.879948 systemd-networkd[1637]: 
lxc3050b12f1708: Link UP Sep 13 01:34:50.887363 systemd-networkd[1637]: lxcc665bbbd0c35: Gained carrier Sep 13 01:34:50.888131 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc665bbbd0c35: link becomes ready Sep 13 01:34:50.898621 kernel: eth0: renamed from tmp3909b Sep 13 01:34:50.910135 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3050b12f1708: link becomes ready Sep 13 01:34:50.913582 systemd-networkd[1637]: lxc3050b12f1708: Gained carrier Sep 13 01:34:50.930271 systemd-networkd[1637]: cilium_vxlan: Gained IPv6LL Sep 13 01:34:51.954268 systemd-networkd[1637]: lxcc665bbbd0c35: Gained IPv6LL Sep 13 01:34:52.082270 systemd-networkd[1637]: lxc3050b12f1708: Gained IPv6LL Sep 13 01:34:52.530246 systemd-networkd[1637]: lxc_health: Gained IPv6LL Sep 13 01:34:54.670168 env[1477]: time="2025-09-13T01:34:54.669203448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:54.670168 env[1477]: time="2025-09-13T01:34:54.669254088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:54.670168 env[1477]: time="2025-09-13T01:34:54.669265208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:54.670168 env[1477]: time="2025-09-13T01:34:54.669509089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca79342042f20684edf5d7312e0f25e0635682a0a0afdde96f337cea7fbd721e pid=3578 runtime=io.containerd.runc.v2 Sep 13 01:34:54.679711 env[1477]: time="2025-09-13T01:34:54.677770108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:34:54.679711 env[1477]: time="2025-09-13T01:34:54.677816068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:34:54.679711 env[1477]: time="2025-09-13T01:34:54.677840308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:34:54.679711 env[1477]: time="2025-09-13T01:34:54.678286149Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3909bffbc22c91d69600b16c81b5a55d33bc390eefa940bc183844e2c2359ed9 pid=3594 runtime=io.containerd.runc.v2 Sep 13 01:34:54.701614 systemd[1]: run-containerd-runc-k8s.io-ca79342042f20684edf5d7312e0f25e0635682a0a0afdde96f337cea7fbd721e-runc.qFnBmK.mount: Deactivated successfully. Sep 13 01:34:54.706919 systemd[1]: Started cri-containerd-ca79342042f20684edf5d7312e0f25e0635682a0a0afdde96f337cea7fbd721e.scope. Sep 13 01:34:54.719086 systemd[1]: Started cri-containerd-3909bffbc22c91d69600b16c81b5a55d33bc390eefa940bc183844e2c2359ed9.scope. Sep 13 01:34:54.783345 env[1477]: time="2025-09-13T01:34:54.783296758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9jgkv,Uid:c7040f20-6afc-4a51-be45-8bbb36fa8f9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3909bffbc22c91d69600b16c81b5a55d33bc390eefa940bc183844e2c2359ed9\"" Sep 13 01:34:54.784262 env[1477]: time="2025-09-13T01:34:54.784230560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8c6gv,Uid:2c15a070-58f9-46a3-96c7-a379fe715288,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca79342042f20684edf5d7312e0f25e0635682a0a0afdde96f337cea7fbd721e\"" Sep 13 01:34:54.791363 env[1477]: time="2025-09-13T01:34:54.791323417Z" level=info msg="CreateContainer within sandbox \"3909bffbc22c91d69600b16c81b5a55d33bc390eefa940bc183844e2c2359ed9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:34:54.796377 env[1477]: time="2025-09-13T01:34:54.796336268Z" level=info msg="CreateContainer within sandbox 
\"ca79342042f20684edf5d7312e0f25e0635682a0a0afdde96f337cea7fbd721e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:34:54.844316 env[1477]: time="2025-09-13T01:34:54.844257502Z" level=info msg="CreateContainer within sandbox \"3909bffbc22c91d69600b16c81b5a55d33bc390eefa940bc183844e2c2359ed9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2da603b8229f3c552be773b53af7043b5a6b279fe745102b035a1a62b0254cc2\"" Sep 13 01:34:54.846494 env[1477]: time="2025-09-13T01:34:54.846459227Z" level=info msg="StartContainer for \"2da603b8229f3c552be773b53af7043b5a6b279fe745102b035a1a62b0254cc2\"" Sep 13 01:34:54.853030 env[1477]: time="2025-09-13T01:34:54.852981842Z" level=info msg="CreateContainer within sandbox \"ca79342042f20684edf5d7312e0f25e0635682a0a0afdde96f337cea7fbd721e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2916a670e17afcf71158bba9aeeb630f0d41d17969201e5b8fa601c047685ba2\"" Sep 13 01:34:54.853968 env[1477]: time="2025-09-13T01:34:54.853934485Z" level=info msg="StartContainer for \"2916a670e17afcf71158bba9aeeb630f0d41d17969201e5b8fa601c047685ba2\"" Sep 13 01:34:54.868588 systemd[1]: Started cri-containerd-2da603b8229f3c552be773b53af7043b5a6b279fe745102b035a1a62b0254cc2.scope. Sep 13 01:34:54.876626 systemd[1]: Started cri-containerd-2916a670e17afcf71158bba9aeeb630f0d41d17969201e5b8fa601c047685ba2.scope. 
Sep 13 01:34:54.932804 env[1477]: time="2025-09-13T01:34:54.932670551Z" level=info msg="StartContainer for \"2916a670e17afcf71158bba9aeeb630f0d41d17969201e5b8fa601c047685ba2\" returns successfully" Sep 13 01:34:54.937890 env[1477]: time="2025-09-13T01:34:54.937829483Z" level=info msg="StartContainer for \"2da603b8229f3c552be773b53af7043b5a6b279fe745102b035a1a62b0254cc2\" returns successfully" Sep 13 01:34:55.643240 kubelet[2401]: I0913 01:34:55.643170 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8c6gv" podStartSLOduration=23.643153752 podStartE2EDuration="23.643153752s" podCreationTimestamp="2025-09-13 01:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:34:55.642462671 +0000 UTC m=+29.249857088" watchObservedRunningTime="2025-09-13 01:34:55.643153752 +0000 UTC m=+29.250548129" Sep 13 01:34:55.680978 kubelet[2401]: I0913 01:34:55.680909 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9jgkv" podStartSLOduration=23.680890319 podStartE2EDuration="23.680890319s" podCreationTimestamp="2025-09-13 01:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:34:55.679840237 +0000 UTC m=+29.287234614" watchObservedRunningTime="2025-09-13 01:34:55.680890319 +0000 UTC m=+29.288284736" Sep 13 01:36:22.419191 update_engine[1463]: I0913 01:36:22.418808 1463 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 01:36:22.419191 update_engine[1463]: I0913 01:36:22.418859 1463 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 01:36:22.419191 update_engine[1463]: I0913 01:36:22.419011 1463 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 
01:36:22.419629 update_engine[1463]: I0913 01:36:22.419391 1463 omaha_request_params.cc:62] Current group set to lts Sep 13 01:36:22.419629 update_engine[1463]: I0913 01:36:22.419494 1463 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 01:36:22.419629 update_engine[1463]: I0913 01:36:22.419499 1463 update_attempter.cc:643] Scheduling an action processor start. Sep 13 01:36:22.419629 update_engine[1463]: I0913 01:36:22.419514 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 01:36:22.419629 update_engine[1463]: I0913 01:36:22.419538 1463 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 01:36:22.420050 locksmithd[1554]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 01:36:22.456409 update_engine[1463]: I0913 01:36:22.456359 1463 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 13 01:36:22.456409 update_engine[1463]: I0913 01:36:22.456396 1463 omaha_request_action.cc:271] Request: Sep 13 01:36:22.456409 update_engine[1463]: Sep 13 01:36:22.456409 update_engine[1463]: Sep 13 01:36:22.456409 update_engine[1463]: Sep 13 01:36:22.456409 update_engine[1463]: Sep 13 01:36:22.456409 update_engine[1463]: Sep 13 01:36:22.456409 update_engine[1463]: Sep 13 01:36:22.456409 update_engine[1463]: Sep 13 01:36:22.456409 update_engine[1463]: Sep 13 01:36:22.456409 update_engine[1463]: I0913 01:36:22.456403 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:36:22.525718 update_engine[1463]: I0913 01:36:22.525678 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:36:22.525967 update_engine[1463]: I0913 01:36:22.525945 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 01:36:22.566136 update_engine[1463]: E0913 01:36:22.566084 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:36:22.566256 update_engine[1463]: I0913 01:36:22.566211 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 01:36:32.397178 update_engine[1463]: I0913 01:36:32.397125 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:36:32.397624 update_engine[1463]: I0913 01:36:32.397351 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:36:32.397624 update_engine[1463]: I0913 01:36:32.397543 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 01:36:32.432196 update_engine[1463]: E0913 01:36:32.432157 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:36:32.432329 update_engine[1463]: I0913 01:36:32.432267 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 01:36:42.394500 update_engine[1463]: I0913 01:36:42.394452 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:36:42.394832 update_engine[1463]: I0913 01:36:42.394666 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:36:42.394900 update_engine[1463]: I0913 01:36:42.394865 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 01:36:42.404312 update_engine[1463]: E0913 01:36:42.404282 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:36:42.404396 update_engine[1463]: I0913 01:36:42.404381 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 01:36:52.396580 update_engine[1463]: I0913 01:36:52.396528 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:36:52.396965 update_engine[1463]: I0913 01:36:52.396744 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:36:52.396965 update_engine[1463]: I0913 01:36:52.396941 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 01:36:52.407197 update_engine[1463]: E0913 01:36:52.407160 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:36:52.407322 update_engine[1463]: I0913 01:36:52.407300 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 01:36:52.407355 update_engine[1463]: I0913 01:36:52.407320 1463 omaha_request_action.cc:621] Omaha request response: Sep 13 01:36:52.407417 update_engine[1463]: E0913 01:36:52.407402 1463 omaha_request_action.cc:640] Omaha request network transfer failed. Sep 13 01:36:52.407446 update_engine[1463]: I0913 01:36:52.407420 1463 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 01:36:52.407446 update_engine[1463]: I0913 01:36:52.407424 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 01:36:52.407446 update_engine[1463]: I0913 01:36:52.407428 1463 update_attempter.cc:306] Processing Done. Sep 13 01:36:52.407446 update_engine[1463]: E0913 01:36:52.407439 1463 update_attempter.cc:619] Update failed. 
Sep 13 01:36:52.407446 update_engine[1463]: I0913 01:36:52.407441 1463 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 01:36:52.407446 update_engine[1463]: I0913 01:36:52.407445 1463 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 01:36:52.407583 update_engine[1463]: I0913 01:36:52.407449 1463 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 13 01:36:52.407583 update_engine[1463]: I0913 01:36:52.407508 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 01:36:52.407583 update_engine[1463]: I0913 01:36:52.407526 1463 omaha_request_action.cc:270] Posting an Omaha request to disabled Sep 13 01:36:52.407583 update_engine[1463]: I0913 01:36:52.407531 1463 omaha_request_action.cc:271] Request: Sep 13 01:36:52.407583 update_engine[1463]: Sep 13 01:36:52.407583 update_engine[1463]: Sep 13 01:36:52.407583 update_engine[1463]: Sep 13 01:36:52.407583 update_engine[1463]: Sep 13 01:36:52.407583 update_engine[1463]: Sep 13 01:36:52.407583 update_engine[1463]: Sep 13 01:36:52.407583 update_engine[1463]: I0913 01:36:52.407534 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 01:36:52.407786 update_engine[1463]: I0913 01:36:52.407656 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 01:36:52.408059 update_engine[1463]: I0913 01:36:52.407804 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 01:36:52.408198 locksmithd[1554]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 01:36:52.412544 update_engine[1463]: E0913 01:36:52.412513 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 01:36:52.412632 update_engine[1463]: I0913 01:36:52.412616 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 01:36:52.412632 update_engine[1463]: I0913 01:36:52.412626 1463 omaha_request_action.cc:621] Omaha request response: Sep 13 01:36:52.412632 update_engine[1463]: I0913 01:36:52.412631 1463 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 01:36:52.412716 update_engine[1463]: I0913 01:36:52.412635 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 01:36:52.412716 update_engine[1463]: I0913 01:36:52.412638 1463 update_attempter.cc:306] Processing Done. Sep 13 01:36:52.412716 update_engine[1463]: I0913 01:36:52.412642 1463 update_attempter.cc:310] Error event sent. Sep 13 01:36:52.412716 update_engine[1463]: I0913 01:36:52.412650 1463 update_check_scheduler.cc:74] Next update check in 48m22s Sep 13 01:36:52.412981 locksmithd[1554]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 01:37:01.689556 systemd[1]: Started sshd@5-10.200.20.18:22-10.200.16.10:45214.service. Sep 13 01:37:02.101579 sshd[3755]: Accepted publickey for core from 10.200.16.10 port 45214 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:02.103408 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:02.108354 systemd[1]: Started session-8.scope. Sep 13 01:37:02.108669 systemd-logind[1462]: New session 8 of user core. 
Sep 13 01:37:02.478088 sshd[3755]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:02.480970 systemd[1]: sshd@5-10.200.20.18:22-10.200.16.10:45214.service: Deactivated successfully. Sep 13 01:37:02.481713 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 01:37:02.482165 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. Sep 13 01:37:02.482836 systemd-logind[1462]: Removed session 8. Sep 13 01:37:07.547453 systemd[1]: Started sshd@6-10.200.20.18:22-10.200.16.10:45226.service. Sep 13 01:37:07.962995 sshd[3770]: Accepted publickey for core from 10.200.16.10 port 45226 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:07.964650 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:07.969001 systemd[1]: Started session-9.scope. Sep 13 01:37:07.969331 systemd-logind[1462]: New session 9 of user core. Sep 13 01:37:08.330983 sshd[3770]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:08.334152 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. Sep 13 01:37:08.334157 systemd[1]: sshd@6-10.200.20.18:22-10.200.16.10:45226.service: Deactivated successfully. Sep 13 01:37:08.334844 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 01:37:08.335567 systemd-logind[1462]: Removed session 9. Sep 13 01:37:13.401382 systemd[1]: Started sshd@7-10.200.20.18:22-10.200.16.10:37438.service. Sep 13 01:37:13.818315 sshd[3782]: Accepted publickey for core from 10.200.16.10 port 37438 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:13.819706 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:13.824385 systemd[1]: Started session-10.scope. Sep 13 01:37:13.824691 systemd-logind[1462]: New session 10 of user core. 
Sep 13 01:37:14.207337 sshd[3782]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:14.210156 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. Sep 13 01:37:14.210242 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 01:37:14.210869 systemd[1]: sshd@7-10.200.20.18:22-10.200.16.10:37438.service: Deactivated successfully. Sep 13 01:37:14.211953 systemd-logind[1462]: Removed session 10. Sep 13 01:37:19.278701 systemd[1]: Started sshd@8-10.200.20.18:22-10.200.16.10:37444.service. Sep 13 01:37:19.695091 sshd[3794]: Accepted publickey for core from 10.200.16.10 port 37444 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:19.696151 sshd[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:19.700695 systemd[1]: Started session-11.scope. Sep 13 01:37:19.700988 systemd-logind[1462]: New session 11 of user core. Sep 13 01:37:20.094340 sshd[3794]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:20.096926 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. Sep 13 01:37:20.097544 systemd[1]: sshd@8-10.200.20.18:22-10.200.16.10:37444.service: Deactivated successfully. Sep 13 01:37:20.098257 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 01:37:20.099210 systemd-logind[1462]: Removed session 11. Sep 13 01:37:20.163164 systemd[1]: Started sshd@9-10.200.20.18:22-10.200.16.10:52166.service. Sep 13 01:37:20.578200 sshd[3807]: Accepted publickey for core from 10.200.16.10 port 52166 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:20.579477 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:20.583973 systemd[1]: Started session-12.scope. Sep 13 01:37:20.584304 systemd-logind[1462]: New session 12 of user core. 
Sep 13 01:37:21.002981 sshd[3807]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:21.005905 systemd[1]: sshd@9-10.200.20.18:22-10.200.16.10:52166.service: Deactivated successfully. Sep 13 01:37:21.007008 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 01:37:21.007840 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit. Sep 13 01:37:21.008697 systemd-logind[1462]: Removed session 12. Sep 13 01:37:21.073079 systemd[1]: Started sshd@10-10.200.20.18:22-10.200.16.10:52168.service. Sep 13 01:37:21.487503 sshd[3817]: Accepted publickey for core from 10.200.16.10 port 52168 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:21.489132 sshd[3817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:21.493465 systemd[1]: Started session-13.scope. Sep 13 01:37:21.494031 systemd-logind[1462]: New session 13 of user core. Sep 13 01:37:21.868374 sshd[3817]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:21.871080 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. Sep 13 01:37:21.871277 systemd[1]: sshd@10-10.200.20.18:22-10.200.16.10:52168.service: Deactivated successfully. Sep 13 01:37:21.871951 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 01:37:21.872743 systemd-logind[1462]: Removed session 13. Sep 13 01:37:26.937879 systemd[1]: Started sshd@11-10.200.20.18:22-10.200.16.10:52170.service. Sep 13 01:37:27.353330 sshd[3831]: Accepted publickey for core from 10.200.16.10 port 52170 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:27.354682 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:27.358765 systemd-logind[1462]: New session 14 of user core. Sep 13 01:37:27.359277 systemd[1]: Started session-14.scope. 
Sep 13 01:37:27.722658 sshd[3831]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:27.725358 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. Sep 13 01:37:27.725445 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 01:37:27.726195 systemd[1]: sshd@11-10.200.20.18:22-10.200.16.10:52170.service: Deactivated successfully. Sep 13 01:37:27.727227 systemd-logind[1462]: Removed session 14. Sep 13 01:37:32.792640 systemd[1]: Started sshd@12-10.200.20.18:22-10.200.16.10:57762.service. Sep 13 01:37:33.206449 sshd[3843]: Accepted publickey for core from 10.200.16.10 port 57762 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:33.208124 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:33.212487 systemd[1]: Started session-15.scope. Sep 13 01:37:33.213276 systemd-logind[1462]: New session 15 of user core. Sep 13 01:37:33.581552 sshd[3843]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:33.584373 systemd[1]: sshd@12-10.200.20.18:22-10.200.16.10:57762.service: Deactivated successfully. Sep 13 01:37:33.585090 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 01:37:33.586000 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. Sep 13 01:37:33.586755 systemd-logind[1462]: Removed session 15. Sep 13 01:37:33.650473 systemd[1]: Started sshd@13-10.200.20.18:22-10.200.16.10:57772.service. Sep 13 01:37:34.063577 sshd[3857]: Accepted publickey for core from 10.200.16.10 port 57772 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:34.065328 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:34.069125 systemd-logind[1462]: New session 16 of user core. Sep 13 01:37:34.069773 systemd[1]: Started session-16.scope. 
Sep 13 01:37:34.470408 sshd[3857]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:34.473038 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. Sep 13 01:37:34.473313 systemd[1]: sshd@13-10.200.20.18:22-10.200.16.10:57772.service: Deactivated successfully. Sep 13 01:37:34.474014 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 01:37:34.474843 systemd-logind[1462]: Removed session 16. Sep 13 01:37:34.539193 systemd[1]: Started sshd@14-10.200.20.18:22-10.200.16.10:57776.service. Sep 13 01:37:34.955696 sshd[3866]: Accepted publickey for core from 10.200.16.10 port 57776 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:34.957379 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:34.961770 systemd[1]: Started session-17.scope. Sep 13 01:37:34.962322 systemd-logind[1462]: New session 17 of user core. Sep 13 01:37:35.830473 sshd[3866]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:35.833033 systemd[1]: sshd@14-10.200.20.18:22-10.200.16.10:57776.service: Deactivated successfully. Sep 13 01:37:35.833798 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 01:37:35.834382 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. Sep 13 01:37:35.835352 systemd-logind[1462]: Removed session 17. Sep 13 01:37:35.899468 systemd[1]: Started sshd@15-10.200.20.18:22-10.200.16.10:57790.service. Sep 13 01:37:36.313946 sshd[3883]: Accepted publickey for core from 10.200.16.10 port 57790 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:36.315347 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:36.319831 systemd[1]: Started session-18.scope. Sep 13 01:37:36.320155 systemd-logind[1462]: New session 18 of user core. 
Sep 13 01:37:36.803753 sshd[3883]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:36.806524 systemd[1]: sshd@15-10.200.20.18:22-10.200.16.10:57790.service: Deactivated successfully. Sep 13 01:37:36.807553 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 01:37:36.808325 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. Sep 13 01:37:36.809035 systemd-logind[1462]: Removed session 18. Sep 13 01:37:36.873555 systemd[1]: Started sshd@16-10.200.20.18:22-10.200.16.10:57804.service. Sep 13 01:37:37.286968 sshd[3892]: Accepted publickey for core from 10.200.16.10 port 57804 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:37.286770 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:37.291136 systemd-logind[1462]: New session 19 of user core. Sep 13 01:37:37.291582 systemd[1]: Started session-19.scope. Sep 13 01:37:37.654201 sshd[3892]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:37.657488 systemd[1]: sshd@16-10.200.20.18:22-10.200.16.10:57804.service: Deactivated successfully. Sep 13 01:37:37.658226 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 01:37:37.658755 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit. Sep 13 01:37:37.659508 systemd-logind[1462]: Removed session 19. Sep 13 01:37:42.724526 systemd[1]: Started sshd@17-10.200.20.18:22-10.200.16.10:57520.service. Sep 13 01:37:43.137586 sshd[3905]: Accepted publickey for core from 10.200.16.10 port 57520 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:43.139300 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:43.143574 systemd[1]: Started session-20.scope. Sep 13 01:37:43.144608 systemd-logind[1462]: New session 20 of user core. 
Sep 13 01:37:43.503263 sshd[3905]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:43.505715 systemd[1]: sshd@17-10.200.20.18:22-10.200.16.10:57520.service: Deactivated successfully. Sep 13 01:37:43.506463 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 01:37:43.507032 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit. Sep 13 01:37:43.507812 systemd-logind[1462]: Removed session 20. Sep 13 01:37:48.572506 systemd[1]: Started sshd@18-10.200.20.18:22-10.200.16.10:57534.service. Sep 13 01:37:48.987512 sshd[3916]: Accepted publickey for core from 10.200.16.10 port 57534 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:48.989246 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:48.993052 systemd-logind[1462]: New session 21 of user core. Sep 13 01:37:48.993730 systemd[1]: Started session-21.scope. Sep 13 01:37:49.358556 sshd[3916]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:49.361308 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit. Sep 13 01:37:49.361972 systemd[1]: sshd@18-10.200.20.18:22-10.200.16.10:57534.service: Deactivated successfully. Sep 13 01:37:49.362755 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 01:37:49.363467 systemd-logind[1462]: Removed session 21. Sep 13 01:37:54.428501 systemd[1]: Started sshd@19-10.200.20.18:22-10.200.16.10:54830.service. Sep 13 01:37:54.846801 sshd[3929]: Accepted publickey for core from 10.200.16.10 port 54830 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:54.848471 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:54.852801 systemd[1]: Started session-22.scope. Sep 13 01:37:54.853142 systemd-logind[1462]: New session 22 of user core. 
Sep 13 01:37:55.222777 sshd[3929]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:55.225372 systemd[1]: sshd@19-10.200.20.18:22-10.200.16.10:54830.service: Deactivated successfully. Sep 13 01:37:55.226126 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 01:37:55.226702 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit. Sep 13 01:37:55.227609 systemd-logind[1462]: Removed session 22. Sep 13 01:37:55.291335 systemd[1]: Started sshd@20-10.200.20.18:22-10.200.16.10:54840.service. Sep 13 01:37:55.712364 sshd[3941]: Accepted publickey for core from 10.200.16.10 port 54840 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:37:55.714128 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:37:55.718163 systemd-logind[1462]: New session 23 of user core. Sep 13 01:37:55.718620 systemd[1]: Started session-23.scope. Sep 13 01:37:58.177590 env[1477]: time="2025-09-13T01:37:58.177518594Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:37:58.177909 env[1477]: time="2025-09-13T01:37:58.177826554Z" level=info msg="StopContainer for \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\" with timeout 30 (s)" Sep 13 01:37:58.178241 env[1477]: time="2025-09-13T01:37:58.178203314Z" level=info msg="Stop container \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\" with signal terminated" Sep 13 01:37:58.188407 env[1477]: time="2025-09-13T01:37:58.188355037Z" level=info msg="StopContainer for \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\" with timeout 2 (s)" Sep 13 01:37:58.188697 env[1477]: time="2025-09-13T01:37:58.188670357Z" level=info msg="Stop container 
\"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\" with signal terminated" Sep 13 01:37:58.194931 systemd-networkd[1637]: lxc_health: Link DOWN Sep 13 01:37:58.194937 systemd-networkd[1637]: lxc_health: Lost carrier Sep 13 01:37:58.198601 systemd[1]: cri-containerd-1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49.scope: Deactivated successfully. Sep 13 01:37:58.222238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49-rootfs.mount: Deactivated successfully. Sep 13 01:37:58.224279 systemd[1]: cri-containerd-02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482.scope: Deactivated successfully. Sep 13 01:37:58.224612 systemd[1]: cri-containerd-02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482.scope: Consumed 6.624s CPU time. Sep 13 01:37:58.247470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482-rootfs.mount: Deactivated successfully. 
Sep 13 01:37:58.265764 env[1477]: time="2025-09-13T01:37:58.265716940Z" level=info msg="shim disconnected" id=02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482 Sep 13 01:37:58.266134 env[1477]: time="2025-09-13T01:37:58.266111140Z" level=warning msg="cleaning up after shim disconnected" id=02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482 namespace=k8s.io Sep 13 01:37:58.266262 env[1477]: time="2025-09-13T01:37:58.266245660Z" level=info msg="cleaning up dead shim" Sep 13 01:37:58.266431 env[1477]: time="2025-09-13T01:37:58.265716780Z" level=info msg="shim disconnected" id=1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49 Sep 13 01:37:58.266479 env[1477]: time="2025-09-13T01:37:58.266432780Z" level=warning msg="cleaning up after shim disconnected" id=1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49 namespace=k8s.io Sep 13 01:37:58.266479 env[1477]: time="2025-09-13T01:37:58.266442100Z" level=info msg="cleaning up dead shim" Sep 13 01:37:58.273732 env[1477]: time="2025-09-13T01:37:58.273690102Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:37:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4005 runtime=io.containerd.runc.v2\n" Sep 13 01:37:58.275979 env[1477]: time="2025-09-13T01:37:58.275934823Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:37:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4006 runtime=io.containerd.runc.v2\n" Sep 13 01:37:58.281167 env[1477]: time="2025-09-13T01:37:58.281129664Z" level=info msg="StopContainer for \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\" returns successfully" Sep 13 01:37:58.281913 env[1477]: time="2025-09-13T01:37:58.281878984Z" level=info msg="StopPodSandbox for \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\"" Sep 13 01:37:58.281979 env[1477]: time="2025-09-13T01:37:58.281942784Z" level=info msg="Container to stop 
\"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:37:58.281979 env[1477]: time="2025-09-13T01:37:58.281960384Z" level=info msg="Container to stop \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:37:58.281979 env[1477]: time="2025-09-13T01:37:58.281971824Z" level=info msg="Container to stop \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:37:58.285659 env[1477]: time="2025-09-13T01:37:58.281983344Z" level=info msg="Container to stop \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:37:58.285659 env[1477]: time="2025-09-13T01:37:58.281994424Z" level=info msg="Container to stop \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:37:58.284758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687-shm.mount: Deactivated successfully. Sep 13 01:37:58.292092 env[1477]: time="2025-09-13T01:37:58.289335627Z" level=info msg="StopContainer for \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\" returns successfully" Sep 13 01:37:58.296994 systemd[1]: cri-containerd-0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687.scope: Deactivated successfully. 
Sep 13 01:37:58.298409 env[1477]: time="2025-09-13T01:37:58.298379429Z" level=info msg="StopPodSandbox for \"2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c\"" Sep 13 01:37:58.298572 env[1477]: time="2025-09-13T01:37:58.298552349Z" level=info msg="Container to stop \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:37:58.300391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c-shm.mount: Deactivated successfully. Sep 13 01:37:58.309596 systemd[1]: cri-containerd-2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c.scope: Deactivated successfully. Sep 13 01:37:58.341774 env[1477]: time="2025-09-13T01:37:58.341716322Z" level=info msg="shim disconnected" id=2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c Sep 13 01:37:58.341774 env[1477]: time="2025-09-13T01:37:58.341770082Z" level=warning msg="cleaning up after shim disconnected" id=2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c namespace=k8s.io Sep 13 01:37:58.341774 env[1477]: time="2025-09-13T01:37:58.341780002Z" level=info msg="cleaning up dead shim" Sep 13 01:37:58.342015 env[1477]: time="2025-09-13T01:37:58.341956882Z" level=info msg="shim disconnected" id=0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687 Sep 13 01:37:58.342015 env[1477]: time="2025-09-13T01:37:58.341998882Z" level=warning msg="cleaning up after shim disconnected" id=0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687 namespace=k8s.io Sep 13 01:37:58.342015 env[1477]: time="2025-09-13T01:37:58.342007322Z" level=info msg="cleaning up dead shim" Sep 13 01:37:58.354039 env[1477]: time="2025-09-13T01:37:58.353986006Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:37:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4066 runtime=io.containerd.runc.v2\n" Sep 
13 01:37:58.354210 env[1477]: time="2025-09-13T01:37:58.354182446Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:37:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n" Sep 13 01:37:58.354484 env[1477]: time="2025-09-13T01:37:58.354456646Z" level=info msg="TearDown network for sandbox \"2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c\" successfully" Sep 13 01:37:58.354524 env[1477]: time="2025-09-13T01:37:58.354483686Z" level=info msg="StopPodSandbox for \"2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c\" returns successfully" Sep 13 01:37:58.354890 env[1477]: time="2025-09-13T01:37:58.354763366Z" level=info msg="TearDown network for sandbox \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" successfully" Sep 13 01:37:58.354890 env[1477]: time="2025-09-13T01:37:58.354789366Z" level=info msg="StopPodSandbox for \"0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687\" returns successfully" Sep 13 01:37:58.480905 kubelet[2401]: I0913 01:37:58.480867 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cni-path\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.481361 kubelet[2401]: I0913 01:37:58.481344 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5l65\" (UniqueName: \"kubernetes.io/projected/52a91b44-6450-4141-9606-8bc18f1baad6-kube-api-access-v5l65\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.481464 kubelet[2401]: I0913 01:37:58.481453 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96864318-acd9-4ff0-955f-ec3fddddff45-cilium-config-path\") 
pod \"96864318-acd9-4ff0-955f-ec3fddddff45\" (UID: \"96864318-acd9-4ff0-955f-ec3fddddff45\") " Sep 13 01:37:58.481548 kubelet[2401]: I0913 01:37:58.481537 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-host-proc-sys-kernel\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.481629 kubelet[2401]: I0913 01:37:58.481619 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52a91b44-6450-4141-9606-8bc18f1baad6-clustermesh-secrets\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.481711 kubelet[2401]: I0913 01:37:58.481700 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52a91b44-6450-4141-9606-8bc18f1baad6-hubble-tls\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.481798 kubelet[2401]: I0913 01:37:58.481788 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-run\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.481877 kubelet[2401]: I0913 01:37:58.481867 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-host-proc-sys-net\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.481953 kubelet[2401]: I0913 01:37:58.481939 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-lib-modules\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.482025 kubelet[2401]: I0913 01:37:58.482015 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-cgroup\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.482116 kubelet[2401]: I0913 01:37:58.482105 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-config-path\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.482200 kubelet[2401]: I0913 01:37:58.482189 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-etc-cni-netd\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.482280 kubelet[2401]: I0913 01:37:58.482270 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-xtables-lock\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.482356 kubelet[2401]: I0913 01:37:58.482345 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-bpf-maps\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.482430 kubelet[2401]: I0913 
01:37:58.482420 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkcgm\" (UniqueName: \"kubernetes.io/projected/96864318-acd9-4ff0-955f-ec3fddddff45-kube-api-access-bkcgm\") pod \"96864318-acd9-4ff0-955f-ec3fddddff45\" (UID: \"96864318-acd9-4ff0-955f-ec3fddddff45\") " Sep 13 01:37:58.482505 kubelet[2401]: I0913 01:37:58.482494 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-hostproc\") pod \"52a91b44-6450-4141-9606-8bc18f1baad6\" (UID: \"52a91b44-6450-4141-9606-8bc18f1baad6\") " Sep 13 01:37:58.482617 kubelet[2401]: I0913 01:37:58.481176 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cni-path" (OuterVolumeSpecName: "cni-path") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.482711 kubelet[2401]: I0913 01:37:58.482604 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-hostproc" (OuterVolumeSpecName: "hostproc") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.483062 kubelet[2401]: I0913 01:37:58.483036 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.486036 kubelet[2401]: I0913 01:37:58.484824 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96864318-acd9-4ff0-955f-ec3fddddff45-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "96864318-acd9-4ff0-955f-ec3fddddff45" (UID: "96864318-acd9-4ff0-955f-ec3fddddff45"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:37:58.486036 kubelet[2401]: I0913 01:37:58.484897 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.486036 kubelet[2401]: I0913 01:37:58.485136 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.486911 kubelet[2401]: I0913 01:37:58.486879 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:37:58.486993 kubelet[2401]: I0913 01:37:58.486932 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.486993 kubelet[2401]: I0913 01:37:58.486950 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.486993 kubelet[2401]: I0913 01:37:58.486964 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.488738 kubelet[2401]: I0913 01:37:58.488714 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.489094 kubelet[2401]: I0913 01:37:58.488963 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:37:58.490089 kubelet[2401]: I0913 01:37:58.489058 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a91b44-6450-4141-9606-8bc18f1baad6-kube-api-access-v5l65" (OuterVolumeSpecName: "kube-api-access-v5l65") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "kube-api-access-v5l65". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:37:58.490359 kubelet[2401]: I0913 01:37:58.490324 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52a91b44-6450-4141-9606-8bc18f1baad6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:37:58.491713 kubelet[2401]: I0913 01:37:58.491671 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96864318-acd9-4ff0-955f-ec3fddddff45-kube-api-access-bkcgm" (OuterVolumeSpecName: "kube-api-access-bkcgm") pod "96864318-acd9-4ff0-955f-ec3fddddff45" (UID: "96864318-acd9-4ff0-955f-ec3fddddff45"). InnerVolumeSpecName "kube-api-access-bkcgm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:37:58.492469 kubelet[2401]: I0913 01:37:58.492449 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a91b44-6450-4141-9606-8bc18f1baad6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "52a91b44-6450-4141-9606-8bc18f1baad6" (UID: "52a91b44-6450-4141-9606-8bc18f1baad6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:37:58.523795 systemd[1]: Removed slice kubepods-besteffort-pod96864318_acd9_4ff0_955f_ec3fddddff45.slice. Sep 13 01:37:58.525570 systemd[1]: Removed slice kubepods-burstable-pod52a91b44_6450_4141_9606_8bc18f1baad6.slice. Sep 13 01:37:58.525651 systemd[1]: kubepods-burstable-pod52a91b44_6450_4141_9606_8bc18f1baad6.slice: Consumed 6.717s CPU time. Sep 13 01:37:58.583222 kubelet[2401]: I0913 01:37:58.583178 2401 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583222 kubelet[2401]: I0913 01:37:58.583216 2401 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52a91b44-6450-4141-9606-8bc18f1baad6-clustermesh-secrets\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583222 kubelet[2401]: I0913 01:37:58.583227 2401 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96864318-acd9-4ff0-955f-ec3fddddff45-cilium-config-path\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583222 kubelet[2401]: I0913 01:37:58.583236 2401 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52a91b44-6450-4141-9606-8bc18f1baad6-hubble-tls\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath 
\"\"" Sep 13 01:37:58.583445 kubelet[2401]: I0913 01:37:58.583252 2401 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-run\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583445 kubelet[2401]: I0913 01:37:58.583261 2401 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-host-proc-sys-net\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583445 kubelet[2401]: I0913 01:37:58.583269 2401 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-lib-modules\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583445 kubelet[2401]: I0913 01:37:58.583278 2401 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-cgroup\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583445 kubelet[2401]: I0913 01:37:58.583286 2401 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52a91b44-6450-4141-9606-8bc18f1baad6-cilium-config-path\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583445 kubelet[2401]: I0913 01:37:58.583294 2401 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-etc-cni-netd\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583445 kubelet[2401]: I0913 01:37:58.583302 2401 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-xtables-lock\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" 
Sep 13 01:37:58.583445 kubelet[2401]: I0913 01:37:58.583310 2401 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-bpf-maps\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583624 kubelet[2401]: I0913 01:37:58.583318 2401 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bkcgm\" (UniqueName: \"kubernetes.io/projected/96864318-acd9-4ff0-955f-ec3fddddff45-kube-api-access-bkcgm\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583624 kubelet[2401]: I0913 01:37:58.583325 2401 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-hostproc\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583624 kubelet[2401]: I0913 01:37:58.583334 2401 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52a91b44-6450-4141-9606-8bc18f1baad6-cni-path\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.583624 kubelet[2401]: I0913 01:37:58.583342 2401 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5l65\" (UniqueName: \"kubernetes.io/projected/52a91b44-6450-4141-9606-8bc18f1baad6-kube-api-access-v5l65\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\"" Sep 13 01:37:58.963063 kubelet[2401]: I0913 01:37:58.962720 2401 scope.go:117] "RemoveContainer" containerID="1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49" Sep 13 01:37:58.964908 env[1477]: time="2025-09-13T01:37:58.964296064Z" level=info msg="RemoveContainer for \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\"" Sep 13 01:37:58.973486 env[1477]: time="2025-09-13T01:37:58.973438347Z" level=info msg="RemoveContainer for \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\" returns successfully" Sep 13 01:37:58.973740 
kubelet[2401]: I0913 01:37:58.973719 2401 scope.go:117] "RemoveContainer" containerID="1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49" Sep 13 01:37:58.975107 env[1477]: time="2025-09-13T01:37:58.974072267Z" level=error msg="ContainerStatus for \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\": not found" Sep 13 01:37:58.975195 kubelet[2401]: E0913 01:37:58.974333 2401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\": not found" containerID="1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49" Sep 13 01:37:58.975195 kubelet[2401]: I0913 01:37:58.974375 2401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49"} err="failed to get container status \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e0f1ecf51398f7b23d3e3bf19c223c724a9138565955ef1b577d1c0cce0ac49\": not found" Sep 13 01:37:58.975195 kubelet[2401]: I0913 01:37:58.974464 2401 scope.go:117] "RemoveContainer" containerID="02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482" Sep 13 01:37:58.976423 env[1477]: time="2025-09-13T01:37:58.976391628Z" level=info msg="RemoveContainer for \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\"" Sep 13 01:37:58.991118 env[1477]: time="2025-09-13T01:37:58.991054672Z" level=info msg="RemoveContainer for \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\" returns successfully" Sep 13 01:37:58.991457 kubelet[2401]: I0913 01:37:58.991427 2401 
scope.go:117] "RemoveContainer" containerID="078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f" Sep 13 01:37:58.995862 env[1477]: time="2025-09-13T01:37:58.995585833Z" level=info msg="RemoveContainer for \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\"" Sep 13 01:37:59.005836 env[1477]: time="2025-09-13T01:37:59.005710996Z" level=info msg="RemoveContainer for \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\" returns successfully" Sep 13 01:37:59.006146 kubelet[2401]: I0913 01:37:59.006123 2401 scope.go:117] "RemoveContainer" containerID="4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986" Sep 13 01:37:59.007607 env[1477]: time="2025-09-13T01:37:59.007370837Z" level=info msg="RemoveContainer for \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\"" Sep 13 01:37:59.018192 env[1477]: time="2025-09-13T01:37:59.018153560Z" level=info msg="RemoveContainer for \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\" returns successfully" Sep 13 01:37:59.018539 kubelet[2401]: I0913 01:37:59.018512 2401 scope.go:117] "RemoveContainer" containerID="7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95" Sep 13 01:37:59.019686 env[1477]: time="2025-09-13T01:37:59.019651480Z" level=info msg="RemoveContainer for \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\"" Sep 13 01:37:59.028784 env[1477]: time="2025-09-13T01:37:59.028739883Z" level=info msg="RemoveContainer for \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\" returns successfully" Sep 13 01:37:59.029002 kubelet[2401]: I0913 01:37:59.028980 2401 scope.go:117] "RemoveContainer" containerID="9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f" Sep 13 01:37:59.030019 env[1477]: time="2025-09-13T01:37:59.029988203Z" level=info msg="RemoveContainer for \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\"" Sep 13 01:37:59.045457 env[1477]: 
time="2025-09-13T01:37:59.045417328Z" level=info msg="RemoveContainer for \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\" returns successfully" Sep 13 01:37:59.045872 kubelet[2401]: I0913 01:37:59.045845 2401 scope.go:117] "RemoveContainer" containerID="02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482" Sep 13 01:37:59.046178 env[1477]: time="2025-09-13T01:37:59.046085808Z" level=error msg="ContainerStatus for \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\": not found" Sep 13 01:37:59.046419 kubelet[2401]: E0913 01:37:59.046394 2401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\": not found" containerID="02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482" Sep 13 01:37:59.046482 kubelet[2401]: I0913 01:37:59.046426 2401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482"} err="failed to get container status \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\": rpc error: code = NotFound desc = an error occurred when try to find container \"02c68a670ddca74b95f43f84e5be2ad442401c5eda37d0b807a9ae8f7ebc8482\": not found" Sep 13 01:37:59.046482 kubelet[2401]: I0913 01:37:59.046446 2401 scope.go:117] "RemoveContainer" containerID="078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f" Sep 13 01:37:59.046683 env[1477]: time="2025-09-13T01:37:59.046641968Z" level=error msg="ContainerStatus for \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\": not found" Sep 13 01:37:59.046844 kubelet[2401]: E0913 01:37:59.046822 2401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\": not found" containerID="078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f" Sep 13 01:37:59.046922 kubelet[2401]: I0913 01:37:59.046846 2401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f"} err="failed to get container status \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"078d9b67b5ef73b2867b3a7882989951c3a282cfe550cf1b0f04b6a2e8b5db5f\": not found" Sep 13 01:37:59.046922 kubelet[2401]: I0913 01:37:59.046862 2401 scope.go:117] "RemoveContainer" containerID="4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986" Sep 13 01:37:59.047175 env[1477]: time="2025-09-13T01:37:59.047069888Z" level=error msg="ContainerStatus for \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\": not found" Sep 13 01:37:59.047348 kubelet[2401]: E0913 01:37:59.047327 2401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\": not found" containerID="4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986" Sep 13 01:37:59.047416 kubelet[2401]: I0913 01:37:59.047346 2401 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986"} err="failed to get container status \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f8ed54339a8cf377e4c1e11b44744f6baf5796a102379594f44ae23ed7b6986\": not found" Sep 13 01:37:59.047416 kubelet[2401]: I0913 01:37:59.047358 2401 scope.go:117] "RemoveContainer" containerID="7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95" Sep 13 01:37:59.047607 env[1477]: time="2025-09-13T01:37:59.047567649Z" level=error msg="ContainerStatus for \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\": not found" Sep 13 01:37:59.047747 kubelet[2401]: E0913 01:37:59.047727 2401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\": not found" containerID="7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95" Sep 13 01:37:59.047812 kubelet[2401]: I0913 01:37:59.047746 2401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95"} err="failed to get container status \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\": rpc error: code = NotFound desc = an error occurred when try to find container \"7da2bd465e0e0d1b4dc19ea7019f544be4a8326381d774a4387977a6cac25b95\": not found" Sep 13 01:37:59.047812 kubelet[2401]: I0913 01:37:59.047758 2401 scope.go:117] "RemoveContainer" containerID="9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f" Sep 13 01:37:59.047993 env[1477]: 
time="2025-09-13T01:37:59.047955369Z" level=error msg="ContainerStatus for \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\": not found" Sep 13 01:37:59.048165 kubelet[2401]: E0913 01:37:59.048137 2401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\": not found" containerID="9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f" Sep 13 01:37:59.048237 kubelet[2401]: I0913 01:37:59.048164 2401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f"} err="failed to get container status \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a08b5ebc6ec156aa2e349c5846be72bf5aeab9db1347c389b19a9fa12fdf43f\": not found" Sep 13 01:37:59.163883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fe51fdead979db63511c9b20b34cab287ed096c67d8cf257493505e24db612c-rootfs.mount: Deactivated successfully. Sep 13 01:37:59.163978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0307630d5a4608efa6edf9bf19bc37a57b46c3d8d2b96d6de5db718af2a51687-rootfs.mount: Deactivated successfully. Sep 13 01:37:59.164052 systemd[1]: var-lib-kubelet-pods-96864318\x2dacd9\x2d4ff0\x2d955f\x2dec3fddddff45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbkcgm.mount: Deactivated successfully. Sep 13 01:37:59.164121 systemd[1]: var-lib-kubelet-pods-52a91b44\x2d6450\x2d4141\x2d9606\x2d8bc18f1baad6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv5l65.mount: Deactivated successfully. 
Sep 13 01:37:59.164175 systemd[1]: var-lib-kubelet-pods-52a91b44\x2d6450\x2d4141\x2d9606\x2d8bc18f1baad6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 01:37:59.164226 systemd[1]: var-lib-kubelet-pods-52a91b44\x2d6450\x2d4141\x2d9606\x2d8bc18f1baad6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:38:00.169335 sshd[3941]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:00.171879 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit. Sep 13 01:38:00.172038 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 01:38:00.172235 systemd[1]: session-23.scope: Consumed 1.534s CPU time. Sep 13 01:38:00.172975 systemd[1]: sshd@20-10.200.20.18:22-10.200.16.10:54840.service: Deactivated successfully. Sep 13 01:38:00.173957 systemd-logind[1462]: Removed session 23. Sep 13 01:38:00.237723 systemd[1]: Started sshd@21-10.200.20.18:22-10.200.16.10:43462.service. Sep 13 01:38:00.519678 kubelet[2401]: I0913 01:38:00.519641 2401 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a91b44-6450-4141-9606-8bc18f1baad6" path="/var/lib/kubelet/pods/52a91b44-6450-4141-9606-8bc18f1baad6/volumes" Sep 13 01:38:00.520210 kubelet[2401]: I0913 01:38:00.520190 2401 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96864318-acd9-4ff0-955f-ec3fddddff45" path="/var/lib/kubelet/pods/96864318-acd9-4ff0-955f-ec3fddddff45/volumes" Sep 13 01:38:00.650487 sshd[4097]: Accepted publickey for core from 10.200.16.10 port 43462 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:38:00.652121 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:38:00.657487 systemd[1]: Started session-24.scope. Sep 13 01:38:00.658159 systemd-logind[1462]: New session 24 of user core. 
Sep 13 01:38:01.628565 kubelet[2401]: E0913 01:38:01.628517 2401 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:38:01.942527 kubelet[2401]: I0913 01:38:01.942416 2401 memory_manager.go:355] "RemoveStaleState removing state" podUID="96864318-acd9-4ff0-955f-ec3fddddff45" containerName="cilium-operator" Sep 13 01:38:01.942689 kubelet[2401]: I0913 01:38:01.942676 2401 memory_manager.go:355] "RemoveStaleState removing state" podUID="52a91b44-6450-4141-9606-8bc18f1baad6" containerName="cilium-agent" Sep 13 01:38:01.944341 kubelet[2401]: I0913 01:38:01.944298 2401 status_manager.go:890] "Failed to get status for pod" podUID="a8ebc250-b546-4f2e-bf40-dbefb81730a0" pod="kube-system/cilium-pg5w6" err="pods \"cilium-pg5w6\" is forbidden: User \"system:node:ci-3510.3.8-n-8e33b0f951\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-8e33b0f951' and this object" Sep 13 01:38:01.949240 systemd[1]: Created slice kubepods-burstable-poda8ebc250_b546_4f2e_bf40_dbefb81730a0.slice. 
Sep 13 01:38:01.955574 kubelet[2401]: W0913 01:38:01.955533 2401 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.8-n-8e33b0f951" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-8e33b0f951' and this object Sep 13 01:38:01.956145 kubelet[2401]: E0913 01:38:01.956122 2401 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510.3.8-n-8e33b0f951\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-8e33b0f951' and this object" logger="UnhandledError" Sep 13 01:38:01.956336 kubelet[2401]: W0913 01:38:01.955777 2401 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.8-n-8e33b0f951" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-8e33b0f951' and this object Sep 13 01:38:01.956448 kubelet[2401]: E0913 01:38:01.956431 2401 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510.3.8-n-8e33b0f951\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-8e33b0f951' and this object" logger="UnhandledError" Sep 13 01:38:02.001930 sshd[4097]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:02.004955 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit. 
Sep 13 01:38:02.005537 systemd[1]: sshd@21-10.200.20.18:22-10.200.16.10:43462.service: Deactivated successfully. Sep 13 01:38:02.006311 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 01:38:02.007044 systemd-logind[1462]: Removed session 24. Sep 13 01:38:02.070360 systemd[1]: Started sshd@22-10.200.20.18:22-10.200.16.10:43464.service. Sep 13 01:38:02.110128 kubelet[2401]: I0913 01:38:02.109927 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-lib-modules\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110128 kubelet[2401]: I0913 01:38:02.109969 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-ipsec-secrets\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110128 kubelet[2401]: I0913 01:38:02.109996 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-host-proc-sys-kernel\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110128 kubelet[2401]: I0913 01:38:02.110013 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-xtables-lock\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110128 kubelet[2401]: I0913 01:38:02.110029 2401 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vlk\" (UniqueName: \"kubernetes.io/projected/a8ebc250-b546-4f2e-bf40-dbefb81730a0-kube-api-access-j6vlk\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110381 kubelet[2401]: I0913 01:38:02.110047 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-run\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110381 kubelet[2401]: I0913 01:38:02.110069 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-hostproc\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110381 kubelet[2401]: I0913 01:38:02.110084 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-etc-cni-netd\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110381 kubelet[2401]: I0913 01:38:02.110125 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-host-proc-sys-net\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110381 kubelet[2401]: I0913 01:38:02.110146 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a8ebc250-b546-4f2e-bf40-dbefb81730a0-clustermesh-secrets\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110381 kubelet[2401]: I0913 01:38:02.110162 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-cgroup\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110511 kubelet[2401]: I0913 01:38:02.110178 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cni-path\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110511 kubelet[2401]: I0913 01:38:02.110202 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-bpf-maps\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110511 kubelet[2401]: I0913 01:38:02.110219 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-config-path\") pod \"cilium-pg5w6\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.110511 kubelet[2401]: I0913 01:38:02.110234 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8ebc250-b546-4f2e-bf40-dbefb81730a0-hubble-tls\") pod \"cilium-pg5w6\" (UID: 
\"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " pod="kube-system/cilium-pg5w6" Sep 13 01:38:02.483522 sshd[4109]: Accepted publickey for core from 10.200.16.10 port 43464 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk Sep 13 01:38:02.484865 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:38:02.489584 systemd[1]: Started session-25.scope. Sep 13 01:38:02.489873 systemd-logind[1462]: New session 25 of user core. Sep 13 01:38:02.815883 kubelet[2401]: E0913 01:38:02.815770 2401 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-pg5w6" podUID="a8ebc250-b546-4f2e-bf40-dbefb81730a0" Sep 13 01:38:02.862819 sshd[4109]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:02.866185 systemd[1]: sshd@22-10.200.20.18:22-10.200.16.10:43464.service: Deactivated successfully. Sep 13 01:38:02.866889 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 01:38:02.867226 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit. Sep 13 01:38:02.867903 systemd-logind[1462]: Removed session 25. Sep 13 01:38:02.928879 systemd[1]: Started sshd@23-10.200.20.18:22-10.200.16.10:43468.service. 
Sep 13 01:38:03.116421 kubelet[2401]: I0913 01:38:03.115878 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-hostproc\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116421 kubelet[2401]: I0913 01:38:03.115924 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-host-proc-sys-kernel\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116421 kubelet[2401]: I0913 01:38:03.115942 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-etc-cni-netd\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116421 kubelet[2401]: I0913 01:38:03.115956 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cni-path\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116421 kubelet[2401]: I0913 01:38:03.115973 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-run\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116421 kubelet[2401]: I0913 01:38:03.115994 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8ebc250-b546-4f2e-bf40-dbefb81730a0-hubble-tls\") pod 
\"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116665 kubelet[2401]: I0913 01:38:03.116013 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-xtables-lock\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116665 kubelet[2401]: I0913 01:38:03.116030 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-lib-modules\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116665 kubelet[2401]: I0913 01:38:03.116045 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6vlk\" (UniqueName: \"kubernetes.io/projected/a8ebc250-b546-4f2e-bf40-dbefb81730a0-kube-api-access-j6vlk\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116665 kubelet[2401]: I0913 01:38:03.116063 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-host-proc-sys-net\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116665 kubelet[2401]: I0913 01:38:03.116082 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8ebc250-b546-4f2e-bf40-dbefb81730a0-clustermesh-secrets\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116665 kubelet[2401]: I0913 01:38:03.116108 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-cgroup\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116799 kubelet[2401]: I0913 01:38:03.116130 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-bpf-maps\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") " Sep 13 01:38:03.116799 kubelet[2401]: I0913 01:38:03.116194 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:03.116799 kubelet[2401]: I0913 01:38:03.116220 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:03.116799 kubelet[2401]: I0913 01:38:03.116234 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:03.116799 kubelet[2401]: I0913 01:38:03.116250 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:03.116963 kubelet[2401]: I0913 01:38:03.116263 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:03.116963 kubelet[2401]: I0913 01:38:03.116276 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:03.121210 kubelet[2401]: I0913 01:38:03.119408 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ebc250-b546-4f2e-bf40-dbefb81730a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 01:38:03.121210 kubelet[2401]: I0913 01:38:03.119465 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:38:03.120579 systemd[1]: var-lib-kubelet-pods-a8ebc250\x2db546\x2d4f2e\x2dbf40\x2ddbefb81730a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 01:38:03.123715 kubelet[2401]: I0913 01:38:03.123674 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:38:03.123801 kubelet[2401]: I0913 01:38:03.123787 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ebc250-b546-4f2e-bf40-dbefb81730a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 01:38:03.123828 kubelet[2401]: I0913 01:38:03.123810 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:38:03.123854 kubelet[2401]: I0913 01:38:03.123826 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 01:38:03.123927 kubelet[2401]: I0913 01:38:03.123898 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ebc250-b546-4f2e-bf40-dbefb81730a0-kube-api-access-j6vlk" (OuterVolumeSpecName: "kube-api-access-j6vlk") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "kube-api-access-j6vlk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 01:38:03.212332 kubelet[2401]: E0913 01:38:03.212294 2401 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 13 01:38:03.212453 kubelet[2401]: E0913 01:38:03.212387 2401 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-config-path podName:a8ebc250-b546-4f2e-bf40-dbefb81730a0 nodeName:}" failed. No retries permitted until 2025-09-13 01:38:03.712364768 +0000 UTC m=+217.319759145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-config-path") pod "cilium-pg5w6" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0") : failed to sync configmap cache: timed out waiting for the condition
Sep 13 01:38:03.216728 systemd[1]: var-lib-kubelet-pods-a8ebc250\x2db546\x2d4f2e\x2dbf40\x2ddbefb81730a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6vlk.mount: Deactivated successfully.
Sep 13 01:38:03.218310 kubelet[2401]: I0913 01:38:03.216768 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-ipsec-secrets\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") "
Sep 13 01:38:03.218310 kubelet[2401]: I0913 01:38:03.216843 2401 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-hostproc\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218310 kubelet[2401]: I0913 01:38:03.216853 2401 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cni-path\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218310 kubelet[2401]: I0913 01:38:03.216862 2401 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218310 kubelet[2401]: I0913 01:38:03.216874 2401 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-etc-cni-netd\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218310 kubelet[2401]: I0913 01:38:03.216884 2401 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-run\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218310 kubelet[2401]: I0913 01:38:03.216892 2401 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8ebc250-b546-4f2e-bf40-dbefb81730a0-hubble-tls\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218310 kubelet[2401]: I0913 01:38:03.216901 2401 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-xtables-lock\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.216825 systemd[1]: var-lib-kubelet-pods-a8ebc250\x2db546\x2d4f2e\x2dbf40\x2ddbefb81730a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 01:38:03.218553 kubelet[2401]: I0913 01:38:03.216909 2401 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-lib-modules\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218553 kubelet[2401]: I0913 01:38:03.216916 2401 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j6vlk\" (UniqueName: \"kubernetes.io/projected/a8ebc250-b546-4f2e-bf40-dbefb81730a0-kube-api-access-j6vlk\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218553 kubelet[2401]: I0913 01:38:03.216925 2401 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-host-proc-sys-net\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218553 kubelet[2401]: I0913 01:38:03.216936 2401 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8ebc250-b546-4f2e-bf40-dbefb81730a0-clustermesh-secrets\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218553 kubelet[2401]: I0913 01:38:03.216944 2401 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-cgroup\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.218553 kubelet[2401]: I0913 01:38:03.216952 2401 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8ebc250-b546-4f2e-bf40-dbefb81730a0-bpf-maps\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.222430 kubelet[2401]: I0913 01:38:03.222368 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 01:38:03.222550 systemd[1]: var-lib-kubelet-pods-a8ebc250\x2db546\x2d4f2e\x2dbf40\x2ddbefb81730a0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 01:38:03.317515 kubelet[2401]: I0913 01:38:03.317481 2401 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.338916 sshd[4123]: Accepted publickey for core from 10.200.16.10 port 43468 ssh2: RSA SHA256:2vdFvqmv97G7XTFyIQCFZZcqRFoIpW6ty3nYdUf/oyk
Sep 13 01:38:03.340254 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 01:38:03.344575 systemd[1]: Started session-26.scope.
Sep 13 01:38:03.345031 systemd-logind[1462]: New session 26 of user core.
Sep 13 01:38:03.821151 kubelet[2401]: I0913 01:38:03.821107 2401 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-config-path\") pod \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\" (UID: \"a8ebc250-b546-4f2e-bf40-dbefb81730a0\") "
Sep 13 01:38:03.822820 kubelet[2401]: I0913 01:38:03.822785 2401 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8ebc250-b546-4f2e-bf40-dbefb81730a0" (UID: "a8ebc250-b546-4f2e-bf40-dbefb81730a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 01:38:03.921519 kubelet[2401]: I0913 01:38:03.921489 2401 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8ebc250-b546-4f2e-bf40-dbefb81730a0-cilium-config-path\") on node \"ci-3510.3.8-n-8e33b0f951\" DevicePath \"\""
Sep 13 01:38:03.983077 systemd[1]: Removed slice kubepods-burstable-poda8ebc250_b546_4f2e_bf40_dbefb81730a0.slice.
Sep 13 01:38:04.034838 systemd[1]: Created slice kubepods-burstable-pod9ebc2b5c_83a4_436a_b5cc_91b7bec80c50.slice.
Sep 13 01:38:04.122787 kubelet[2401]: I0913 01:38:04.122677 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbxq\" (UniqueName: \"kubernetes.io/projected/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-kube-api-access-svbxq\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122787 kubelet[2401]: I0913 01:38:04.122722 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-cilium-run\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122787 kubelet[2401]: I0913 01:38:04.122743 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-cilium-cgroup\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122787 kubelet[2401]: I0913 01:38:04.122766 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-lib-modules\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122988 kubelet[2401]: I0913 01:38:04.122792 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-cni-path\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122988 kubelet[2401]: I0913 01:38:04.122809 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-etc-cni-netd\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122988 kubelet[2401]: I0913 01:38:04.122826 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-clustermesh-secrets\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122988 kubelet[2401]: I0913 01:38:04.122842 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-cilium-config-path\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122988 kubelet[2401]: I0913 01:38:04.122858 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-cilium-ipsec-secrets\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.122988 kubelet[2401]: I0913 01:38:04.122871 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-host-proc-sys-net\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.123155 kubelet[2401]: I0913 01:38:04.122885 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-hubble-tls\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.123155 kubelet[2401]: I0913 01:38:04.122899 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-host-proc-sys-kernel\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.123155 kubelet[2401]: I0913 01:38:04.122914 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-bpf-maps\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.123155 kubelet[2401]: I0913 01:38:04.122929 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-hostproc\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.123155 kubelet[2401]: I0913 01:38:04.122943 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ebc2b5c-83a4-436a-b5cc-91b7bec80c50-xtables-lock\") pod \"cilium-p88tk\" (UID: \"9ebc2b5c-83a4-436a-b5cc-91b7bec80c50\") " pod="kube-system/cilium-p88tk"
Sep 13 01:38:04.338086 env[1477]: time="2025-09-13T01:38:04.337678188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p88tk,Uid:9ebc2b5c-83a4-436a-b5cc-91b7bec80c50,Namespace:kube-system,Attempt:0,}"
Sep 13 01:38:04.368555 env[1477]: time="2025-09-13T01:38:04.368472277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:38:04.368694 env[1477]: time="2025-09-13T01:38:04.368564197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:38:04.368694 env[1477]: time="2025-09-13T01:38:04.368589197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:38:04.368930 env[1477]: time="2025-09-13T01:38:04.368880597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3 pid=4152 runtime=io.containerd.runc.v2
Sep 13 01:38:04.386268 systemd[1]: Started cri-containerd-1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3.scope.
Sep 13 01:38:04.411173 env[1477]: time="2025-09-13T01:38:04.411126490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p88tk,Uid:9ebc2b5c-83a4-436a-b5cc-91b7bec80c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\""
Sep 13 01:38:04.414534 env[1477]: time="2025-09-13T01:38:04.414495931Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 01:38:04.446371 env[1477]: time="2025-09-13T01:38:04.446324661Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39c92532e23318ec348b439dfffdd80945f1869ed31e92f76fbdb43966e48bbf\""
Sep 13 01:38:04.447316 env[1477]: time="2025-09-13T01:38:04.447288701Z" level=info msg="StartContainer for \"39c92532e23318ec348b439dfffdd80945f1869ed31e92f76fbdb43966e48bbf\""
Sep 13 01:38:04.462305 systemd[1]: Started cri-containerd-39c92532e23318ec348b439dfffdd80945f1869ed31e92f76fbdb43966e48bbf.scope.
Sep 13 01:38:04.492745 env[1477]: time="2025-09-13T01:38:04.492702675Z" level=info msg="StartContainer for \"39c92532e23318ec348b439dfffdd80945f1869ed31e92f76fbdb43966e48bbf\" returns successfully"
Sep 13 01:38:04.497680 systemd[1]: cri-containerd-39c92532e23318ec348b439dfffdd80945f1869ed31e92f76fbdb43966e48bbf.scope: Deactivated successfully.
Sep 13 01:38:04.521357 kubelet[2401]: I0913 01:38:04.521322 2401 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ebc250-b546-4f2e-bf40-dbefb81730a0" path="/var/lib/kubelet/pods/a8ebc250-b546-4f2e-bf40-dbefb81730a0/volumes"
Sep 13 01:38:04.549486 env[1477]: time="2025-09-13T01:38:04.549443412Z" level=info msg="shim disconnected" id=39c92532e23318ec348b439dfffdd80945f1869ed31e92f76fbdb43966e48bbf
Sep 13 01:38:04.549710 env[1477]: time="2025-09-13T01:38:04.549691572Z" level=warning msg="cleaning up after shim disconnected" id=39c92532e23318ec348b439dfffdd80945f1869ed31e92f76fbdb43966e48bbf namespace=k8s.io
Sep 13 01:38:04.549770 env[1477]: time="2025-09-13T01:38:04.549756532Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:04.557463 env[1477]: time="2025-09-13T01:38:04.557422454Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4238 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:04.985706 env[1477]: time="2025-09-13T01:38:04.985664184Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 01:38:05.019392 env[1477]: time="2025-09-13T01:38:05.019333715Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d316b42c7fe05a521060e8c67aad74ee888deb80ad45ad99a3c67f27d253a02\""
Sep 13 01:38:05.020195 env[1477]: time="2025-09-13T01:38:05.020170035Z" level=info msg="StartContainer for \"1d316b42c7fe05a521060e8c67aad74ee888deb80ad45ad99a3c67f27d253a02\""
Sep 13 01:38:05.036436 systemd[1]: Started cri-containerd-1d316b42c7fe05a521060e8c67aad74ee888deb80ad45ad99a3c67f27d253a02.scope.
Sep 13 01:38:05.074574 env[1477]: time="2025-09-13T01:38:05.074504411Z" level=info msg="StartContainer for \"1d316b42c7fe05a521060e8c67aad74ee888deb80ad45ad99a3c67f27d253a02\" returns successfully"
Sep 13 01:38:05.085801 systemd[1]: cri-containerd-1d316b42c7fe05a521060e8c67aad74ee888deb80ad45ad99a3c67f27d253a02.scope: Deactivated successfully.
Sep 13 01:38:05.118316 env[1477]: time="2025-09-13T01:38:05.118270625Z" level=info msg="shim disconnected" id=1d316b42c7fe05a521060e8c67aad74ee888deb80ad45ad99a3c67f27d253a02
Sep 13 01:38:05.118591 env[1477]: time="2025-09-13T01:38:05.118573025Z" level=warning msg="cleaning up after shim disconnected" id=1d316b42c7fe05a521060e8c67aad74ee888deb80ad45ad99a3c67f27d253a02 namespace=k8s.io
Sep 13 01:38:05.118659 env[1477]: time="2025-09-13T01:38:05.118646465Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:05.126157 env[1477]: time="2025-09-13T01:38:05.126095107Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4302 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:06.007208 env[1477]: time="2025-09-13T01:38:06.003508775Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 01:38:06.026372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046744289.mount: Deactivated successfully.
Sep 13 01:38:06.033287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125453519.mount: Deactivated successfully.
Sep 13 01:38:06.046274 env[1477]: time="2025-09-13T01:38:06.046222948Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8eb385fac26fba459aa6bba1ebb2b605d39b75da91d0319b02385ab95971a4a1\""
Sep 13 01:38:06.047842 env[1477]: time="2025-09-13T01:38:06.047801789Z" level=info msg="StartContainer for \"8eb385fac26fba459aa6bba1ebb2b605d39b75da91d0319b02385ab95971a4a1\""
Sep 13 01:38:06.064472 systemd[1]: Started cri-containerd-8eb385fac26fba459aa6bba1ebb2b605d39b75da91d0319b02385ab95971a4a1.scope.
Sep 13 01:38:06.095059 systemd[1]: cri-containerd-8eb385fac26fba459aa6bba1ebb2b605d39b75da91d0319b02385ab95971a4a1.scope: Deactivated successfully.
Sep 13 01:38:06.103065 env[1477]: time="2025-09-13T01:38:06.103005085Z" level=info msg="StartContainer for \"8eb385fac26fba459aa6bba1ebb2b605d39b75da91d0319b02385ab95971a4a1\" returns successfully"
Sep 13 01:38:06.131170 env[1477]: time="2025-09-13T01:38:06.131068254Z" level=info msg="shim disconnected" id=8eb385fac26fba459aa6bba1ebb2b605d39b75da91d0319b02385ab95971a4a1
Sep 13 01:38:06.131170 env[1477]: time="2025-09-13T01:38:06.131164174Z" level=warning msg="cleaning up after shim disconnected" id=8eb385fac26fba459aa6bba1ebb2b605d39b75da91d0319b02385ab95971a4a1 namespace=k8s.io
Sep 13 01:38:06.131170 env[1477]: time="2025-09-13T01:38:06.131174934Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:06.139326 env[1477]: time="2025-09-13T01:38:06.139279017Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4361 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:06.630338 kubelet[2401]: E0913 01:38:06.630292 2401 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 01:38:06.991552 env[1477]: time="2025-09-13T01:38:06.991505278Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 01:38:07.013024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085762007.mount: Deactivated successfully.
Sep 13 01:38:07.022945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3595984957.mount: Deactivated successfully.
Sep 13 01:38:07.038693 env[1477]: time="2025-09-13T01:38:07.038634253Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce7267f6ca9bc1da9800bfe901986d3c0cb5a5d718f4f01a0778e118d100239b\""
Sep 13 01:38:07.040735 env[1477]: time="2025-09-13T01:38:07.040625053Z" level=info msg="StartContainer for \"ce7267f6ca9bc1da9800bfe901986d3c0cb5a5d718f4f01a0778e118d100239b\""
Sep 13 01:38:07.056514 systemd[1]: Started cri-containerd-ce7267f6ca9bc1da9800bfe901986d3c0cb5a5d718f4f01a0778e118d100239b.scope.
Sep 13 01:38:07.085267 systemd[1]: cri-containerd-ce7267f6ca9bc1da9800bfe901986d3c0cb5a5d718f4f01a0778e118d100239b.scope: Deactivated successfully.
Sep 13 01:38:07.090179 env[1477]: time="2025-09-13T01:38:07.090139149Z" level=info msg="StartContainer for \"ce7267f6ca9bc1da9800bfe901986d3c0cb5a5d718f4f01a0778e118d100239b\" returns successfully"
Sep 13 01:38:07.126518 env[1477]: time="2025-09-13T01:38:07.126473200Z" level=info msg="shim disconnected" id=ce7267f6ca9bc1da9800bfe901986d3c0cb5a5d718f4f01a0778e118d100239b
Sep 13 01:38:07.126800 env[1477]: time="2025-09-13T01:38:07.126783080Z" level=warning msg="cleaning up after shim disconnected" id=ce7267f6ca9bc1da9800bfe901986d3c0cb5a5d718f4f01a0778e118d100239b namespace=k8s.io
Sep 13 01:38:07.126888 env[1477]: time="2025-09-13T01:38:07.126875000Z" level=info msg="cleaning up dead shim"
Sep 13 01:38:07.133559 env[1477]: time="2025-09-13T01:38:07.133516842Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4413 runtime=io.containerd.runc.v2\n"
Sep 13 01:38:07.996442 env[1477]: time="2025-09-13T01:38:07.996390628Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 01:38:08.022602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2052733880.mount: Deactivated successfully.
Sep 13 01:38:08.028019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256434134.mount: Deactivated successfully.
Sep 13 01:38:08.044192 env[1477]: time="2025-09-13T01:38:08.044142083Z" level=info msg="CreateContainer within sandbox \"1ccb12833dcec91900f75bd2332d44240f9fd2da994b0163d00638ed5c1e20c3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1\""
Sep 13 01:38:08.044740 env[1477]: time="2025-09-13T01:38:08.044714803Z" level=info msg="StartContainer for \"a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1\""
Sep 13 01:38:08.059278 systemd[1]: Started cri-containerd-a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1.scope.
Sep 13 01:38:08.097371 env[1477]: time="2025-09-13T01:38:08.097307139Z" level=info msg="StartContainer for \"a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1\" returns successfully"
Sep 13 01:38:08.573316 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 13 01:38:09.017812 kubelet[2401]: I0913 01:38:09.017753 2401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p88tk" podStartSLOduration=5.017735945 podStartE2EDuration="5.017735945s" podCreationTimestamp="2025-09-13 01:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:38:09.017137705 +0000 UTC m=+222.624532122" watchObservedRunningTime="2025-09-13 01:38:09.017735945 +0000 UTC m=+222.625130362"
Sep 13 01:38:09.754162 systemd[1]: run-containerd-runc-k8s.io-a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1-runc.mB2Bx5.mount: Deactivated successfully.
Sep 13 01:38:11.277142 systemd-networkd[1637]: lxc_health: Link UP
Sep 13 01:38:11.301140 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 01:38:11.309952 systemd-networkd[1637]: lxc_health: Gained carrier
Sep 13 01:38:11.893730 systemd[1]: run-containerd-runc-k8s.io-a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1-runc.Kz7TtI.mount: Deactivated successfully.
Sep 13 01:38:12.466301 systemd-networkd[1637]: lxc_health: Gained IPv6LL
Sep 13 01:38:14.021671 systemd[1]: run-containerd-runc-k8s.io-a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1-runc.YrOaEf.mount: Deactivated successfully.
Sep 13 01:38:16.144593 systemd[1]: run-containerd-runc-k8s.io-a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1-runc.iafgBf.mount: Deactivated successfully.
Sep 13 01:38:18.262141 systemd[1]: run-containerd-runc-k8s.io-a6d5c64559ed8747425aa9317bc2a07a9c61ddb75746e8f3a8d16f3a9e369bb1-runc.af5Zll.mount: Deactivated successfully.
Sep 13 01:38:18.379526 sshd[4123]: pam_unix(sshd:session): session closed for user core
Sep 13 01:38:18.382312 systemd[1]: sshd@23-10.200.20.18:22-10.200.16.10:43468.service: Deactivated successfully.
Sep 13 01:38:18.383003 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 01:38:18.384060 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit.
Sep 13 01:38:18.384830 systemd-logind[1462]: Removed session 26.