Aug 13 00:04:09.074757 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 13 00:04:09.074776 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Aug 12 22:50:30 -00 2025 Aug 13 00:04:09.074784 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Aug 13 00:04:09.074791 kernel: printk: bootconsole [pl11] enabled Aug 13 00:04:09.074796 kernel: efi: EFI v2.70 by EDK II Aug 13 00:04:09.074801 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98 Aug 13 00:04:09.074808 kernel: random: crng init done Aug 13 00:04:09.074813 kernel: ACPI: Early table checksum verification disabled Aug 13 00:04:09.074819 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Aug 13 00:04:09.074824 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074830 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074835 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Aug 13 00:04:09.074842 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074847 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074854 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074865 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074871 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074878 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074884 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Aug 13 00:04:09.074890 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 13 00:04:09.074895 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Aug 13 00:04:09.074901 kernel: NUMA: Failed to initialise from firmware Aug 13 00:04:09.074907 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Aug 13 00:04:09.074912 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff] Aug 13 00:04:09.074918 kernel: Zone ranges: Aug 13 00:04:09.074924 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Aug 13 00:04:09.074929 kernel: DMA32 empty Aug 13 00:04:09.074935 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Aug 13 00:04:09.074942 kernel: Movable zone start for each node Aug 13 00:04:09.074948 kernel: Early memory node ranges Aug 13 00:04:09.074953 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Aug 13 00:04:09.074959 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Aug 13 00:04:09.074965 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Aug 13 00:04:09.074970 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Aug 13 00:04:09.074976 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Aug 13 00:04:09.074981 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Aug 13 00:04:09.074987 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Aug 13 00:04:09.074993 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Aug 13 
00:04:09.074999 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Aug 13 00:04:09.075005 kernel: psci: probing for conduit method from ACPI. Aug 13 00:04:09.075014 kernel: psci: PSCIv1.1 detected in firmware. Aug 13 00:04:09.075020 kernel: psci: Using standard PSCI v0.2 function IDs Aug 13 00:04:09.075026 kernel: psci: MIGRATE_INFO_TYPE not supported. Aug 13 00:04:09.075032 kernel: psci: SMC Calling Convention v1.4 Aug 13 00:04:09.075038 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Aug 13 00:04:09.075045 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Aug 13 00:04:09.075051 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Aug 13 00:04:09.075057 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Aug 13 00:04:09.075063 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 13 00:04:09.075069 kernel: Detected PIPT I-cache on CPU0 Aug 13 00:04:09.075076 kernel: CPU features: detected: GIC system register CPU interface Aug 13 00:04:09.075082 kernel: CPU features: detected: Hardware dirty bit management Aug 13 00:04:09.075088 kernel: CPU features: detected: Spectre-BHB Aug 13 00:04:09.075094 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 13 00:04:09.075100 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 13 00:04:09.075106 kernel: CPU features: detected: ARM erratum 1418040 Aug 13 00:04:09.075114 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Aug 13 00:04:09.075120 kernel: CPU features: detected: SSBS not fully self-synchronizing Aug 13 00:04:09.084076 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Aug 13 00:04:09.084084 kernel: Policy zone: Normal Aug 13 00:04:09.084093 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc Aug 13 00:04:09.084101 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:04:09.084107 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:04:09.084114 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:04:09.084130 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:04:09.084138 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB) Aug 13 00:04:09.084144 kernel: Memory: 3986872K/4194160K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 207288K reserved, 0K cma-reserved) Aug 13 00:04:09.084157 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:04:09.084164 kernel: trace event string verifier disabled Aug 13 00:04:09.084170 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:04:09.084177 kernel: rcu: RCU event tracing is enabled. Aug 13 00:04:09.084183 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:04:09.084190 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:04:09.084196 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:04:09.084202 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 13 00:04:09.084208 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:04:09.084214 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 13 00:04:09.084220 kernel: GICv3: 960 SPIs implemented Aug 13 00:04:09.084228 kernel: GICv3: 0 Extended SPIs implemented Aug 13 00:04:09.084234 kernel: GICv3: Distributor has no Range Selector support Aug 13 00:04:09.084241 kernel: Root IRQ handler: gic_handle_irq Aug 13 00:04:09.084247 kernel: GICv3: 16 PPIs implemented Aug 13 00:04:09.084253 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Aug 13 00:04:09.084259 kernel: ITS: No ITS available, not enabling LPIs Aug 13 00:04:09.084266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 00:04:09.084272 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 13 00:04:09.084278 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 13 00:04:09.084285 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 13 00:04:09.084291 kernel: Console: colour dummy device 80x25 Aug 13 00:04:09.084299 kernel: printk: console [tty1] enabled Aug 13 00:04:09.084306 kernel: ACPI: Core revision 20210730 Aug 13 00:04:09.084312 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 13 00:04:09.084319 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:04:09.084325 kernel: LSM: Security Framework initializing Aug 13 00:04:09.084332 kernel: SELinux: Initializing. Aug 13 00:04:09.084338 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:04:09.084346 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:04:09.084352 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Aug 13 00:04:09.084360 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Aug 13 00:04:09.084367 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:04:09.084373 kernel: Remapping and enabling EFI services. Aug 13 00:04:09.084380 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:04:09.084386 kernel: Detected PIPT I-cache on CPU1 Aug 13 00:04:09.084392 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Aug 13 00:04:09.084399 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 00:04:09.084405 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 13 00:04:09.084411 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:04:09.084420 kernel: SMP: Total of 2 processors activated. 
Aug 13 00:04:09.084426 kernel: CPU features: detected: 32-bit EL0 Support Aug 13 00:04:09.084433 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Aug 13 00:04:09.084439 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 13 00:04:09.084445 kernel: CPU features: detected: CRC32 instructions Aug 13 00:04:09.084452 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 13 00:04:09.084459 kernel: CPU features: detected: LSE atomic instructions Aug 13 00:04:09.084465 kernel: CPU features: detected: Privileged Access Never Aug 13 00:04:09.084472 kernel: CPU: All CPU(s) started at EL1 Aug 13 00:04:09.084479 kernel: alternatives: patching kernel code Aug 13 00:04:09.084486 kernel: devtmpfs: initialized Aug 13 00:04:09.084497 kernel: KASLR enabled Aug 13 00:04:09.084505 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:04:09.084512 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:04:09.084519 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:04:09.084526 kernel: SMBIOS 3.1.0 present. Aug 13 00:04:09.084532 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Aug 13 00:04:09.084539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:04:09.084546 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 13 00:04:09.084554 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 13 00:04:09.084562 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 13 00:04:09.084568 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:04:09.084575 kernel: audit: type=2000 audit(0.092:1): state=initialized audit_enabled=0 res=1 Aug 13 00:04:09.084582 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:04:09.084589 kernel: cpuidle: using governor menu Aug 13 00:04:09.084595 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Aug 13 00:04:09.084603 kernel: ASID allocator initialised with 32768 entries Aug 13 00:04:09.084610 kernel: ACPI: bus type PCI registered Aug 13 00:04:09.084617 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:04:09.084624 kernel: Serial: AMBA PL011 UART driver Aug 13 00:04:09.084631 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:04:09.084638 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Aug 13 00:04:09.084644 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:04:09.084651 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Aug 13 00:04:09.084658 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:04:09.084666 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 00:04:09.084672 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:04:09.084679 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:04:09.084685 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:04:09.084692 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 00:04:09.084698 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 00:04:09.084705 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 00:04:09.084712 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:04:09.084718 kernel: ACPI: Interpreter enabled Aug 13 00:04:09.084726 kernel: ACPI: Using GIC for interrupt routing Aug 13 00:04:09.084733 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Aug 13 00:04:09.084740 kernel: printk: console [ttyAMA0] enabled Aug 13 00:04:09.084747 kernel: printk: bootconsole [pl11] disabled Aug 13 00:04:09.084754 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Aug 13 00:04:09.084760 kernel: iommu: Default domain type: Translated Aug 13 00:04:09.084767 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 13 00:04:09.084774 kernel: vgaarb: loaded Aug 13 00:04:09.084780 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 00:04:09.084788 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 00:04:09.084796 kernel: PTP clock support registered Aug 13 00:04:09.084802 kernel: Registered efivars operations Aug 13 00:04:09.084809 kernel: No ACPI PMU IRQ for CPU0 Aug 13 00:04:09.084815 kernel: No ACPI PMU IRQ for CPU1 Aug 13 00:04:09.084822 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 13 00:04:09.084828 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:04:09.084835 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:04:09.084842 kernel: pnp: PnP ACPI init Aug 13 00:04:09.084850 kernel: pnp: PnP ACPI: found 0 devices Aug 13 00:04:09.084857 kernel: NET: Registered PF_INET protocol family Aug 13 00:04:09.084864 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:04:09.084870 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:04:09.084878 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:04:09.084885 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:04:09.084892 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Aug 13 00:04:09.084898 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:04:09.084905 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:04:09.084915 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:04:09.084922 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:04:09.084928 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:04:09.084935 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Aug 13 00:04:09.084942 kernel: kvm [1]: HYP mode not available Aug 13 00:04:09.084948 kernel: Initialise system trusted keyrings Aug 13 00:04:09.084955 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:04:09.084962 kernel: Key type asymmetric registered Aug 13 00:04:09.084968 kernel: Asymmetric key parser 'x509' registered Aug 13 00:04:09.084977 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 00:04:09.084983 kernel: io scheduler mq-deadline registered Aug 13 00:04:09.084990 kernel: io scheduler kyber registered Aug 13 00:04:09.084997 kernel: io scheduler bfq registered Aug 13 00:04:09.085003 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:04:09.085010 kernel: thunder_xcv, ver 1.0 Aug 13 00:04:09.085016 kernel: thunder_bgx, ver 1.0 Aug 13 00:04:09.085023 kernel: nicpf, ver 1.0 Aug 13 00:04:09.085030 kernel: nicvf, ver 1.0 Aug 13 00:04:09.085195 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 00:04:09.085262 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:04:08 UTC (1755043448) Aug 13 00:04:09.085272 kernel: efifb: probing for efifb Aug 13 00:04:09.085279 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 00:04:09.085286 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 00:04:09.085292 kernel: efifb: scrolling: redraw Aug 13 00:04:09.085299 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 00:04:09.085306 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:04:09.085315 kernel: fb0: EFI VGA frame buffer device Aug 13 00:04:09.085321 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Aug 13 00:04:09.085328 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:04:09.085335 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:04:09.085342 kernel: Segment Routing with IPv6 Aug 13 00:04:09.085349 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:04:09.085356 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:04:09.085362 kernel: Key type dns_resolver registered Aug 13 00:04:09.085369 kernel: registered taskstats version 1 Aug 13 00:04:09.085377 kernel: Loading compiled-in X.509 certificates Aug 13 00:04:09.085384 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 72b807ae6dac6ab18c2f4ab9460d3472cf28c19d' Aug 13 00:04:09.085390 kernel: Key type .fscrypt registered Aug 13 00:04:09.085397 kernel: Key type fscrypt-provisioning registered Aug 13 00:04:09.085404 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:04:09.085411 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:04:09.085417 kernel: ima: No architecture policies found Aug 13 00:04:09.085424 kernel: clk: Disabling unused clocks Aug 13 00:04:09.085431 kernel: Freeing unused kernel memory: 36416K Aug 13 00:04:09.085439 kernel: Run /init as init process Aug 13 00:04:09.085445 kernel: with arguments: Aug 13 00:04:09.085452 kernel: /init Aug 13 00:04:09.085459 kernel: with environment: Aug 13 00:04:09.085466 kernel: HOME=/ Aug 13 00:04:09.085472 kernel: TERM=linux Aug 13 00:04:09.085479 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:04:09.085488 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:04:09.085498 systemd[1]: Detected virtualization microsoft. Aug 13 00:04:09.085506 systemd[1]: Detected architecture arm64. Aug 13 00:04:09.085513 systemd[1]: Running in initrd. Aug 13 00:04:09.085519 systemd[1]: No hostname configured, using default hostname. Aug 13 00:04:09.085526 systemd[1]: Hostname set to . Aug 13 00:04:09.085534 systemd[1]: Initializing machine ID from random generator. Aug 13 00:04:09.085541 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:04:09.085548 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:04:09.085556 systemd[1]: Reached target cryptsetup.target. Aug 13 00:04:09.085563 systemd[1]: Reached target paths.target. Aug 13 00:04:09.085570 systemd[1]: Reached target slices.target. Aug 13 00:04:09.085577 systemd[1]: Reached target swap.target. Aug 13 00:04:09.085584 systemd[1]: Reached target timers.target. Aug 13 00:04:09.085592 systemd[1]: Listening on iscsid.socket. Aug 13 00:04:09.085599 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:04:09.085606 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:04:09.085614 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:04:09.085621 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:04:09.085629 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:04:09.085636 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:04:09.085643 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:04:09.085650 systemd[1]: Reached target sockets.target. Aug 13 00:04:09.085657 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:04:09.085664 systemd[1]: Finished network-cleanup.service. 
Aug 13 00:04:09.085672 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:04:09.085680 systemd[1]: Starting systemd-journald.service... Aug 13 00:04:09.085687 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:04:09.085694 systemd[1]: Starting systemd-resolved.service... Aug 13 00:04:09.085702 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:04:09.085714 systemd-journald[276]: Journal started Aug 13 00:04:09.085760 systemd-journald[276]: Runtime Journal (/run/log/journal/b6a70705e7fa4d06835c63d0f3a18f46) is 8.0M, max 78.5M, 70.5M free. Aug 13 00:04:09.075646 systemd-modules-load[277]: Inserted module 'overlay' Aug 13 00:04:09.119147 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:04:09.119207 systemd[1]: Started systemd-journald.service. Aug 13 00:04:09.125395 systemd-resolved[278]: Positive Trust Anchors: Aug 13 00:04:09.132643 kernel: Bridge firewalling registered Aug 13 00:04:09.125418 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:04:09.164507 kernel: audit: type=1130 audit(1755043449.140:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.125448 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:04:09.258847 kernel: SCSI subsystem initialized Aug 13 00:04:09.258876 kernel: audit: type=1130 audit(1755043449.168:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.258887 kernel: audit: type=1130 audit(1755043449.177:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.258896 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:04:09.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.127757 systemd-resolved[278]: Defaulting to hostname 'linux'. 
Aug 13 00:04:09.277020 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:04:09.277046 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:04:09.136979 systemd-modules-load[277]: Inserted module 'br_netfilter' Aug 13 00:04:09.140567 systemd[1]: Started systemd-resolved.service. Aug 13 00:04:09.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.169305 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:04:09.317259 kernel: audit: type=1130 audit(1755043449.288:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.177895 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:04:09.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.277253 systemd-modules-load[277]: Inserted module 'dm_multipath' Aug 13 00:04:09.375239 kernel: audit: type=1130 audit(1755043449.322:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.375266 kernel: audit: type=1130 audit(1755043449.349:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.312713 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:04:09.323590 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 00:04:09.349984 systemd[1]: Reached target nss-lookup.target. Aug 13 00:04:09.385407 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:04:09.401767 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:04:09.420096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:04:09.432902 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:04:09.466232 kernel: audit: type=1130 audit(1755043449.441:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.442161 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:04:09.466687 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:04:09.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.494118 systemd[1]: Starting dracut-cmdline.service... 
Aug 13 00:04:09.526394 kernel: audit: type=1130 audit(1755043449.466:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.526420 kernel: audit: type=1130 audit(1755043449.492:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.526505 dracut-cmdline[298]: dracut-dracut-053 Aug 13 00:04:09.531525 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc Aug 13 00:04:09.598167 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:04:09.614149 kernel: iscsi: registered transport (tcp) Aug 13 00:04:09.636238 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:04:09.636310 kernel: QLogic iSCSI HBA Driver Aug 13 00:04:09.674990 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:04:09.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:09.680916 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:04:09.740145 kernel: raid6: neonx8 gen() 13794 MB/s Aug 13 00:04:09.761145 kernel: raid6: neonx8 xor() 10783 MB/s Aug 13 00:04:09.798136 kernel: raid6: neonx4 gen() 13513 MB/s Aug 13 00:04:09.808165 kernel: raid6: neonx4 xor() 10862 MB/s Aug 13 00:04:09.825169 kernel: raid6: neonx2 gen() 12944 MB/s Aug 13 00:04:09.845143 kernel: raid6: neonx2 xor() 10291 MB/s Aug 13 00:04:09.867171 kernel: raid6: neonx1 gen() 10322 MB/s Aug 13 00:04:09.887166 kernel: raid6: neonx1 xor() 8643 MB/s Aug 13 00:04:09.907164 kernel: raid6: int64x8 gen() 6234 MB/s Aug 13 00:04:09.928168 kernel: raid6: int64x8 xor() 3542 MB/s Aug 13 00:04:09.948166 kernel: raid6: int64x4 gen() 7187 MB/s Aug 13 00:04:09.969165 kernel: raid6: int64x4 xor() 3833 MB/s Aug 13 00:04:09.990169 kernel: raid6: int64x2 gen() 6150 MB/s Aug 13 00:04:10.011168 kernel: raid6: int64x2 xor() 3296 MB/s Aug 13 00:04:10.032167 kernel: raid6: int64x1 gen() 5030 MB/s Aug 13 00:04:10.057365 kernel: raid6: int64x1 xor() 2641 MB/s Aug 13 00:04:10.057430 kernel: raid6: using algorithm neonx8 gen() 13794 MB/s Aug 13 00:04:10.057440 kernel: raid6: .... 
xor() 10783 MB/s, rmw enabled Aug 13 00:04:10.061625 kernel: raid6: using neon recovery algorithm Aug 13 00:04:10.080149 kernel: xor: measuring software checksum speed Aug 13 00:04:10.080203 kernel: 8regs : 15911 MB/sec Aug 13 00:04:10.088573 kernel: 32regs : 20577 MB/sec Aug 13 00:04:10.092788 kernel: arm64_neon : 27766 MB/sec Aug 13 00:04:10.092857 kernel: xor: using function: arm64_neon (27766 MB/sec) Aug 13 00:04:10.168162 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Aug 13 00:04:10.180027 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:04:10.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:10.190000 audit: BPF prog-id=7 op=LOAD Aug 13 00:04:10.190000 audit: BPF prog-id=8 op=LOAD Aug 13 00:04:10.191307 systemd[1]: Starting systemd-udevd.service... Aug 13 00:04:10.211662 systemd-udevd[474]: Using default interface naming scheme 'v252'. Aug 13 00:04:10.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:10.218755 systemd[1]: Started systemd-udevd.service. Aug 13 00:04:10.231749 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:04:10.248562 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation Aug 13 00:04:10.290533 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:04:10.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:10.298773 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:04:10.334585 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:04:10.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:10.409142 kernel: hv_vmbus: Vmbus version:5.3 Aug 13 00:04:10.432798 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 00:04:10.432886 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 13 00:04:10.440149 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 00:04:10.456190 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 00:04:10.456252 kernel: scsi host0: storvsc_host_t Aug 13 00:04:10.456428 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 00:04:10.456451 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 00:04:10.466171 kernel: scsi host1: storvsc_host_t Aug 13 00:04:10.476299 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 13 00:04:10.486408 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 00:04:10.494107 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 00:04:10.513814 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 00:04:10.526501 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:04:10.526520 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 00:04:10.551583 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 00:04:10.551720 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 00:04:10.551809 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:04:10.551890 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 00:04:10.551964 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 00:04:10.552040 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:04:10.552060 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:04:10.616823 kernel: hv_netvsc 0022487a-e6a6-0022-487a-e6a60022487a eth0: VF slot 1 added Aug 13 00:04:10.626807 kernel: hv_vmbus: registering driver hv_pci Aug 13 00:04:10.636221 kernel: hv_pci d1e47166-5891-4c0d-8d0c-2bdc842a9d2c: PCI VMBus probing: Using version 0x10004 Aug 13 00:04:10.712449 kernel: hv_pci d1e47166-5891-4c0d-8d0c-2bdc842a9d2c: PCI host bridge to bus 5891:00 Aug 13 00:04:10.712546 kernel: pci_bus 5891:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Aug 13 00:04:10.712653 kernel: pci_bus 5891:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 00:04:10.712725 kernel: pci 5891:00:02.0: [15b3:1018] type 00 class 0x020000 Aug 13 00:04:10.712829 kernel: pci 5891:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 00:04:10.712904 kernel: pci 5891:00:02.0: enabling Extended Tags Aug 13 00:04:10.712984 kernel: pci 5891:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5891:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Aug 13 00:04:10.713066 kernel: pci_bus 5891:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 00:04:10.713182 kernel: pci 5891:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 00:04:10.750347 kernel: mlx5_core 5891:00:02.0: enabling device (0000 -> 0002) Aug 13 00:04:11.077777 kernel: mlx5_core 5891:00:02.0: firmware version: 16.31.2424 Aug 13 00:04:11.077906 kernel: mlx5_core 5891:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Aug 13 00:04:11.077986 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (535) Aug 13 00:04:11.077996 kernel: hv_netvsc 
0022487a-e6a6-0022-487a-e6a60022487a eth0: VF registering: eth1 Aug 13 00:04:11.078084 kernel: mlx5_core 5891:00:02.0 eth1: joined to eth0 Aug 13 00:04:10.946332 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:04:11.003892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:04:11.094857 kernel: mlx5_core 5891:00:02.0 enP22673s1: renamed from eth1 Aug 13 00:04:11.138406 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:04:11.156195 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:04:11.165519 systemd[1]: Starting disk-uuid.service... Aug 13 00:04:11.191199 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:04:11.207948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:04:11.229146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:04:12.218149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:04:12.218287 disk-uuid[603]: The operation has completed successfully. Aug 13 00:04:12.289228 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:04:12.294430 systemd[1]: Finished disk-uuid.service. Aug 13 00:04:12.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.307009 systemd[1]: Starting verity-setup.service... Aug 13 00:04:12.365164 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:04:12.553017 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:04:12.560037 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:04:12.572800 systemd[1]: Finished verity-setup.service. Aug 13 00:04:12.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.636145 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:04:12.636500 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:04:12.640930 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:04:12.641930 systemd[1]: Starting ignition-setup.service... Aug 13 00:04:12.658827 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:04:12.685801 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:04:12.685869 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:04:12.685886 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:04:12.749695 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:04:12.751476 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:04:12.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.764000 audit: BPF prog-id=9 op=LOAD Aug 13 00:04:12.765523 systemd[1]: Starting systemd-networkd.service... Aug 13 00:04:12.792734 systemd[1]: Finished ignition-setup.service. 
Aug 13 00:04:12.800090 systemd-networkd[874]: lo: Link UP Aug 13 00:04:12.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.800106 systemd-networkd[874]: lo: Gained carrier Aug 13 00:04:12.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.800568 systemd-networkd[874]: Enumeration completed Aug 13 00:04:12.804493 systemd[1]: Started systemd-networkd.service. Aug 13 00:04:12.812402 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:04:12.813394 systemd[1]: Reached target network.target. Aug 13 00:04:12.826658 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:04:12.845298 systemd[1]: Starting iscsiuio.service... Aug 13 00:04:12.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.854267 systemd[1]: Started iscsiuio.service. Aug 13 00:04:12.865596 systemd[1]: Starting iscsid.service... Aug 13 00:04:12.882896 systemd[1]: Started iscsid.service. Aug 13 00:04:12.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.890722 iscsid[881]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:04:12.890722 iscsid[881]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 00:04:12.890722 iscsid[881]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:04:12.890722 iscsid[881]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:04:12.890722 iscsid[881]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:04:12.890722 iscsid[881]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:04:12.890722 iscsid[881]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:04:13.019468 kernel: kauditd_printk_skb: 16 callbacks suppressed Aug 13 00:04:13.019496 kernel: audit: type=1130 audit(1755043452.929:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:13.019508 kernel: mlx5_core 5891:00:02.0 enP22673s1: Link up Aug 13 00:04:12.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.888100 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:04:12.924037 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:04:12.947668 systemd[1]: Reached target remote-fs-pre.target. 
Aug 13 00:04:13.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:12.981619 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:04:13.078230 kernel: audit: type=1130 audit(1755043453.036:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:13.078262 kernel: hv_netvsc 0022487a-e6a6-0022-487a-e6a60022487a eth0: Data path switched to VF: enP22673s1 Aug 13 00:04:13.078436 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:04:12.994737 systemd[1]: Reached target remote-fs.target. Aug 13 00:04:13.008262 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:04:13.031365 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:04:13.067731 systemd-networkd[874]: enP22673s1: Link UP Aug 13 00:04:13.067813 systemd-networkd[874]: eth0: Link UP Aug 13 00:04:13.077937 systemd-networkd[874]: eth0: Gained carrier Aug 13 00:04:13.096726 systemd-networkd[874]: enP22673s1: Gained carrier Aug 13 00:04:13.112254 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:04:14.824286 systemd-networkd[874]: eth0: Gained IPv6LL Aug 13 00:04:15.321573 ignition[877]: Ignition 2.14.0 Aug 13 00:04:15.324351 ignition[877]: Stage: fetch-offline Aug 13 00:04:15.324497 ignition[877]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:04:15.324528 ignition[877]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:04:15.371043 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:04:15.371256 ignition[877]: parsed url from cmdline: "" Aug 13 00:04:15.371259 ignition[877]: no config URL provided Aug 13 00:04:15.371264 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:04:15.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.378640 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:04:15.417920 kernel: audit: type=1130 audit(1755043455.388:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.371275 ignition[877]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:04:15.411743 systemd[1]: Starting ignition-fetch.service... 
Aug 13 00:04:15.371281 ignition[877]: failed to fetch config: resource requires networking Aug 13 00:04:15.371801 ignition[877]: Ignition finished successfully Aug 13 00:04:15.429562 ignition[902]: Ignition 2.14.0 Aug 13 00:04:15.429568 ignition[902]: Stage: fetch Aug 13 00:04:15.429695 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:04:15.429716 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:04:15.433628 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:04:15.433867 ignition[902]: parsed url from cmdline: "" Aug 13 00:04:15.433871 ignition[902]: no config URL provided Aug 13 00:04:15.433877 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:04:15.433889 ignition[902]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:04:15.433927 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 00:04:15.517748 ignition[902]: GET result: OK Aug 13 00:04:15.517887 ignition[902]: config has been read from IMDS userdata Aug 13 00:04:15.521315 unknown[902]: fetched base config from "system" Aug 13 00:04:15.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.517915 ignition[902]: parsing config with SHA512: cbd2ba0522a1c31ec9ccb3dae558a665442fa375c80dc1768723b2cf3872074627fb706f4afd92246481c873972e3cb62e6c7075fc08c86f057c2a83ce981937 Aug 13 00:04:15.521324 unknown[902]: fetched base config from "system" Aug 13 00:04:15.568503 kernel: audit: type=1130 audit(1755043455.533:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.521806 ignition[902]: fetch: fetch complete Aug 13 00:04:15.521330 unknown[902]: fetched user config from "azure" Aug 13 00:04:15.521812 ignition[902]: fetch: fetch passed Aug 13 00:04:15.527664 systemd[1]: Finished ignition-fetch.service. Aug 13 00:04:15.521878 ignition[902]: Ignition finished successfully Aug 13 00:04:15.534947 systemd[1]: Starting ignition-kargs.service... Aug 13 00:04:15.621084 kernel: audit: type=1130 audit(1755043455.597:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.577255 ignition[908]: Ignition 2.14.0 Aug 13 00:04:15.592466 systemd[1]: Finished ignition-kargs.service. Aug 13 00:04:15.577264 ignition[908]: Stage: kargs Aug 13 00:04:15.659199 kernel: audit: type=1130 audit(1755043455.634:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:15.598528 systemd[1]: Starting ignition-disks.service... Aug 13 00:04:15.577426 ignition[908]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:04:15.630286 systemd[1]: Finished ignition-disks.service. Aug 13 00:04:15.577456 ignition[908]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:04:15.635260 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:04:15.580925 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:04:15.658957 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:04:15.587214 ignition[908]: kargs: kargs passed Aug 13 00:04:15.663739 systemd[1]: Reached target local-fs.target. Aug 13 00:04:15.587292 ignition[908]: Ignition finished successfully Aug 13 00:04:15.672840 systemd[1]: Reached target sysinit.target. Aug 13 00:04:15.609927 ignition[914]: Ignition 2.14.0 Aug 13 00:04:15.683960 systemd[1]: Reached target basic.target. Aug 13 00:04:15.609933 ignition[914]: Stage: disks Aug 13 00:04:15.693630 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:04:15.610063 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:04:15.610086 ignition[914]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:04:15.613243 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:04:15.626366 ignition[914]: disks: disks passed Aug 13 00:04:15.626430 ignition[914]: Ignition finished successfully Aug 13 00:04:15.807869 systemd-fsck[922]: ROOT: clean, 629/7326000 files, 481082/7359488 blocks Aug 13 00:04:15.821939 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:04:15.850554 kernel: audit: type=1130 audit(1755043455.827:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:15.850910 systemd[1]: Mounting sysroot.mount... Aug 13 00:04:15.875153 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:04:15.876119 systemd[1]: Mounted sysroot.mount. Aug 13 00:04:15.883760 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:04:15.921726 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:04:15.930743 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:04:15.935637 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:04:15.935694 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:04:15.947394 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:04:16.002232 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:04:16.007631 systemd[1]: Starting initrd-setup-root.service... 
Aug 13 00:04:16.038846 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933) Aug 13 00:04:16.038905 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:04:16.038924 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:04:16.049719 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:04:16.054368 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:04:16.059598 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:04:16.072757 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:04:16.097949 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:04:16.109680 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:04:16.514401 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:04:16.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:16.541628 systemd[1]: Starting ignition-mount.service... Aug 13 00:04:16.552656 kernel: audit: type=1130 audit(1755043456.519:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:16.551797 systemd[1]: Starting sysroot-boot.service... Aug 13 00:04:16.562594 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Aug 13 00:04:16.562861 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Aug 13 00:04:16.586064 ignition[1000]: INFO : Ignition 2.14.0 Aug 13 00:04:16.591093 ignition[1000]: INFO : Stage: mount Aug 13 00:04:16.591093 ignition[1000]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:04:16.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:16.594299 systemd[1]: Finished sysroot-boot.service. Aug 13 00:04:16.632465 kernel: audit: type=1130 audit(1755043456.602:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:16.632504 ignition[1000]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:04:16.632504 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:04:16.632504 ignition[1000]: INFO : mount: mount passed Aug 13 00:04:16.632504 ignition[1000]: INFO : Ignition finished successfully Aug 13 00:04:16.684856 kernel: audit: type=1130 audit(1755043456.637:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:16.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:16.618216 systemd[1]: Finished ignition-mount.service. 
Aug 13 00:04:17.191752 coreos-metadata[932]: Aug 13 00:04:17.191 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:04:17.201616 coreos-metadata[932]: Aug 13 00:04:17.194 INFO Fetch successful Aug 13 00:04:17.233200 coreos-metadata[932]: Aug 13 00:04:17.233 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:04:17.255074 coreos-metadata[932]: Aug 13 00:04:17.255 INFO Fetch successful Aug 13 00:04:17.271469 coreos-metadata[932]: Aug 13 00:04:17.271 INFO wrote hostname ci-3510.3.8-a-72bb20ad6b to /sysroot/etc/hostname Aug 13 00:04:17.281806 systemd[1]: Finished flatcar-metadata-hostname.service. Aug 13 00:04:17.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:17.288626 systemd[1]: Starting ignition-files.service... Aug 13 00:04:17.304198 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:04:17.325160 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1013) Aug 13 00:04:17.337306 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:04:17.337344 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:04:17.341982 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:04:17.351243 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:04:17.367910 ignition[1032]: INFO : Ignition 2.14.0 Aug 13 00:04:17.367910 ignition[1032]: INFO : Stage: files Aug 13 00:04:17.380042 ignition[1032]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:04:17.380042 ignition[1032]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:04:17.380042 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:04:17.380042 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:04:17.380042 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:04:17.380042 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:04:17.467196 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:04:17.475818 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:04:17.487435 unknown[1032]: wrote ssh authorized keys file for user: core Aug 13 00:04:17.495520 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:04:17.495520 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:04:17.495520 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:04:17.495520 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:04:17.495520 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:04:17.495520 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 00:04:17.495520 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem647078660" Aug 13 00:04:17.592710 ignition[1032]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem647078660": device or resource busy Aug 13 00:04:17.592710 ignition[1032]: ERROR : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem647078660", trying btrfs: device or resource busy Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem647078660" Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem647078660" Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [started] unmounting "/mnt/oem647078660" Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem647078660" Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:04:17.592710 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem340375679" Aug 13 00:04:17.592710 ignition[1032]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem340375679": device or resource busy Aug 13 00:04:17.548416 systemd[1]: mnt-oem647078660.mount: Deactivated successfully. 
Aug 13 00:04:17.786240 ignition[1032]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem340375679", trying btrfs: device or resource busy Aug 13 00:04:17.786240 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem340375679" Aug 13 00:04:17.786240 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem340375679" Aug 13 00:04:17.786240 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem340375679" Aug 13 00:04:17.786240 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem340375679" Aug 13 00:04:17.786240 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:04:17.786240 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 00:04:17.786240 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Aug 13 00:04:17.602170 systemd[1]: mnt-oem340375679.mount: Deactivated successfully. Aug 13 00:04:18.603898 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Aug 13 00:04:18.874490 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Aug 13 00:04:18.874490 ignition[1032]: INFO : files: op(f): [started] processing unit "waagent.service" Aug 13 00:04:18.874490 ignition[1032]: INFO : files: op(f): [finished] processing unit "waagent.service" Aug 13 00:04:18.874490 ignition[1032]: INFO : files: op(10): [started] processing unit "nvidia.service" Aug 13 00:04:18.874490 ignition[1032]: INFO : files: op(10): [finished] processing unit "nvidia.service" Aug 13 00:04:18.874490 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "waagent.service" Aug 13 00:04:18.961086 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 00:04:18.961113 kernel: audit: type=1130 audit(1755043458.898:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:18.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:18.961223 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "waagent.service" Aug 13 00:04:18.961223 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "nvidia.service" Aug 13 00:04:18.961223 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "nvidia.service" Aug 13 00:04:18.961223 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:04:18.961223 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:04:18.961223 ignition[1032]: INFO : files: files passed Aug 13 00:04:18.961223 ignition[1032]: INFO : Ignition finished successfully Aug 13 00:04:19.089137 kernel: audit: type=1130 audit(1755043458.965:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.089163 kernel: audit: type=1131 audit(1755043458.989:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.089174 kernel: audit: type=1130 audit(1755043459.039:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:18.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:18.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:18.888627 systemd[1]: Finished ignition-files.service. Aug 13 00:04:18.901746 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:04:19.099839 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:04:18.935272 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:04:19.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:18.941801 systemd[1]: Starting ignition-quench.service... Aug 13 00:04:19.172917 kernel: audit: type=1130 audit(1755043459.113:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:19.172946 kernel: audit: type=1131 audit(1755043459.113:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:18.953950 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:04:18.954074 systemd[1]: Finished ignition-quench.service. Aug 13 00:04:19.033430 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:04:19.039489 systemd[1]: Reached target ignition-complete.target. Aug 13 00:04:19.075842 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:04:19.101569 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:04:19.245410 kernel: audit: type=1130 audit(1755043459.219:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.101701 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:04:19.113990 systemd[1]: Reached target initrd-fs.target. Aug 13 00:04:19.168280 systemd[1]: Reached target initrd.target. Aug 13 00:04:19.177348 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:04:19.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.183270 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:04:19.319583 kernel: audit: type=1130 audit(1755043459.263:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.319619 kernel: audit: type=1131 audit(1755043459.263:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.213270 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:04:19.220807 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:04:19.255006 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:04:19.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.255109 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:04:19.264117 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:04:19.315215 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:04:19.388359 kernel: audit: type=1131 audit(1755043459.343:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.325718 systemd[1]: Stopped target timers.target. Aug 13 00:04:19.334232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Aug 13 00:04:19.334314 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:04:19.365770 systemd[1]: Stopped target initrd.target. Aug 13 00:04:19.375102 systemd[1]: Stopped target basic.target. Aug 13 00:04:19.383622 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:04:19.393181 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:04:19.402165 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:04:19.413173 systemd[1]: Stopped target remote-fs.target. Aug 13 00:04:19.422359 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:04:19.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.431284 systemd[1]: Stopped target sysinit.target. Aug 13 00:04:19.439275 systemd[1]: Stopped target local-fs.target. Aug 13 00:04:19.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.448778 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:04:19.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.457710 systemd[1]: Stopped target swap.target. Aug 13 00:04:19.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.466254 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:04:19.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.466331 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:04:19.474522 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:04:19.551440 iscsid[881]: iscsid shutting down. Aug 13 00:04:19.483482 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:04:19.565523 ignition[1071]: INFO : Ignition 2.14.0 Aug 13 00:04:19.565523 ignition[1071]: INFO : Stage: umount Aug 13 00:04:19.565523 ignition[1071]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:04:19.565523 ignition[1071]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:04:19.565523 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:04:19.565523 ignition[1071]: INFO : umount: umount passed Aug 13 00:04:19.565523 ignition[1071]: INFO : Ignition finished successfully Aug 13 00:04:19.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:19.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.483547 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:04:19.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.491729 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:04:19.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.491775 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:04:19.501538 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:04:19.501587 systemd[1]: Stopped ignition-files.service. Aug 13 00:04:19.511059 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:04:19.511115 systemd[1]: Stopped flatcar-metadata-hostname.service. Aug 13 00:04:19.521603 systemd[1]: Stopping ignition-mount.service... Aug 13 00:04:19.532001 systemd[1]: Stopping iscsid.service... Aug 13 00:04:19.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.536595 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:04:19.555802 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:04:19.555905 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:04:19.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.571232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:04:19.571310 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:04:19.580424 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 00:04:19.580581 systemd[1]: Stopped iscsid.service. Aug 13 00:04:19.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.589396 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:04:19.589837 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:04:19.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:19.589941 systemd[1]: Stopped ignition-mount.service. Aug 13 00:04:19.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.620202 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:04:19.620276 systemd[1]: Stopped ignition-disks.service. Aug 13 00:04:19.629056 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:04:19.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.629109 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:04:19.871000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:04:19.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.633598 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:04:19.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.633646 systemd[1]: Stopped ignition-fetch.service. Aug 13 00:04:19.644402 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:04:19.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.644450 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:04:19.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.654912 systemd[1]: Stopped target paths.target. Aug 13 00:04:19.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.672399 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:04:19.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.676151 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:04:19.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.683136 systemd[1]: Stopped target slices.target. Aug 13 00:04:19.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.692576 systemd[1]: Stopped target sockets.target. 
Aug 13 00:04:19.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.703299 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:04:19.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.703356 systemd[1]: Closed iscsid.socket. Aug 13 00:04:19.711551 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:04:19.711605 systemd[1]: Stopped ignition-setup.service. Aug 13 00:04:20.012910 kernel: hv_netvsc 0022487a-e6a6-0022-487a-e6a60022487a eth0: Data path switched from VF: enP22673s1 Aug 13 00:04:19.720520 systemd[1]: Stopping iscsiuio.service... Aug 13 00:04:19.733784 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:04:19.733915 systemd[1]: Stopped iscsiuio.service. Aug 13 00:04:19.743044 systemd[1]: Stopped target network.target. Aug 13 00:04:19.754800 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:04:19.754865 systemd[1]: Closed iscsiuio.socket. Aug 13 00:04:19.766637 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:04:19.775891 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:04:19.786243 systemd-networkd[874]: eth0: DHCPv6 lease lost Aug 13 00:04:20.049000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:04:19.787743 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:04:19.787926 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:04:19.793929 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:04:19.793968 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:04:19.803358 systemd[1]: Stopping network-cleanup.service... Aug 13 00:04:19.812089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:04:19.812237 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:04:19.817574 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:04:19.817621 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:04:19.827090 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:04:19.827187 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:04:19.832624 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:04:19.850667 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:04:19.851346 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:04:20.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:19.851500 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:04:19.860640 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:04:19.860783 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:04:19.872385 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:04:19.872501 systemd[1]: Stopped sysroot-boot.service. 
Aug 13 00:04:19.885194 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:04:19.885253 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:04:19.896042 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:04:20.192660 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Aug 13 00:04:19.896080 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:04:19.901311 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:04:19.901371 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:04:19.910138 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:04:19.910193 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:04:19.920701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:04:19.920751 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:04:19.929676 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:04:19.929725 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:04:19.938801 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:04:19.948706 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:04:19.948776 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 00:04:19.962022 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:04:19.962090 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:04:19.967432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:04:19.967486 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:04:19.977155 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:04:19.977686 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:04:19.977805 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:04:20.124757 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:04:20.124864 systemd[1]: Stopped network-cleanup.service. Aug 13 00:04:20.129761 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:04:20.141435 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:04:20.158532 systemd[1]: Switching root. Aug 13 00:04:20.193309 systemd-journald[276]: Journal stopped Aug 13 00:04:30.974937 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:04:30.974959 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:04:30.974971 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:04:30.974981 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:04:30.974989 kernel: SELinux: policy capability open_perms=1 Aug 13 00:04:30.974996 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:04:30.975005 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:04:30.975013 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:04:30.975021 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:04:30.975029 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:04:30.975037 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:04:30.975049 systemd[1]: Successfully loaded SELinux policy in 350.147ms. Aug 13 00:04:30.975059 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.445ms. 
Aug 13 00:04:30.975069 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:04:30.975079 systemd[1]: Detected virtualization microsoft. Aug 13 00:04:30.975090 systemd[1]: Detected architecture arm64. Aug 13 00:04:30.975099 systemd[1]: Detected first boot. Aug 13 00:04:30.975108 systemd[1]: Hostname set to . Aug 13 00:04:30.975117 systemd[1]: Initializing machine ID from random generator. Aug 13 00:04:30.975140 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:04:30.975149 kernel: kauditd_printk_skb: 41 callbacks suppressed Aug 13 00:04:30.975159 kernel: audit: type=1400 audit(1755043464.029:89): avc: denied { associate } for pid=1104 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:04:30.975170 kernel: audit: type=1300 audit(1755043464.029:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:30.975181 kernel: audit: type=1327 audit(1755043464.029:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:04:30.975190 kernel: audit: type=1400 audit(1755043464.040:90): avc: denied { associate } for pid=1104 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:04:30.975200 kernel: audit: type=1300 audit(1755043464.040:90): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145975 a2=1ed a3=0 items=2 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:30.975209 kernel: audit: type=1307 audit(1755043464.040:90): cwd="/" Aug 13 00:04:30.975219 kernel: audit: type=1302 audit(1755043464.040:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:30.975228 kernel: audit: type=1302 audit(1755043464.040:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:30.975237 kernel: audit: type=1327 audit(1755043464.040:90): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:04:30.975246 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:04:30.975256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:04:30.975265 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:04:30.975276 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:04:30.975285 kernel: audit: type=1334 audit(1755043470.129:91): prog-id=12 op=LOAD Aug 13 00:04:30.975294 kernel: audit: type=1334 audit(1755043470.129:92): prog-id=3 op=UNLOAD Aug 13 00:04:30.975302 kernel: audit: type=1334 audit(1755043470.136:93): prog-id=13 op=LOAD Aug 13 00:04:30.975311 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:04:30.975320 kernel: audit: type=1334 audit(1755043470.144:94): prog-id=14 op=LOAD Aug 13 00:04:30.975329 systemd[1]: Stopped initrd-switch-root.service. Aug 13 00:04:30.975340 kernel: audit: type=1334 audit(1755043470.144:95): prog-id=4 op=UNLOAD Aug 13 00:04:30.975351 kernel: audit: type=1334 audit(1755043470.144:96): prog-id=5 op=UNLOAD Aug 13 00:04:30.975361 kernel: audit: type=1131 audit(1755043470.145:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.975369 kernel: audit: type=1334 audit(1755043470.180:98): prog-id=12 op=UNLOAD Aug 13 00:04:30.975380 kernel: audit: type=1130 audit(1755043470.208:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.975389 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:04:30.975399 kernel: audit: type=1131 audit(1755043470.208:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.975408 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:04:30.975418 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:04:30.975428 systemd[1]: Created slice system-getty.slice. Aug 13 00:04:30.975438 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:04:30.975447 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:04:30.975456 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:04:30.975466 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:04:30.975475 systemd[1]: Created slice user.slice. Aug 13 00:04:30.975485 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:04:30.975494 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:04:30.975504 systemd[1]: Set up automount boot.automount. 
Aug 13 00:04:30.975514 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:04:30.975524 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 00:04:30.975533 systemd[1]: Stopped target initrd-fs.target. Aug 13 00:04:30.975542 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 00:04:30.975552 systemd[1]: Reached target integritysetup.target. Aug 13 00:04:30.975561 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:04:30.975570 systemd[1]: Reached target remote-fs.target. Aug 13 00:04:30.975582 systemd[1]: Reached target slices.target. Aug 13 00:04:30.975591 systemd[1]: Reached target swap.target. Aug 13 00:04:30.975600 systemd[1]: Reached target torcx.target. Aug 13 00:04:30.975609 systemd[1]: Reached target veritysetup.target. Aug 13 00:04:30.975619 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:04:30.975629 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:04:30.975638 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:04:30.975649 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:04:30.975659 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:04:30.975668 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:04:30.975678 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:04:30.975687 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:04:30.975696 systemd[1]: Mounting media.mount... Aug 13 00:04:30.975706 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:04:30.975717 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:04:30.975726 systemd[1]: Mounting tmp.mount... Aug 13 00:04:30.975736 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:04:30.975745 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:04:30.975755 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:04:30.975764 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:04:30.975774 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:04:30.975785 systemd[1]: Starting modprobe@drm.service... Aug 13 00:04:30.975794 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:04:30.975804 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:04:30.975814 systemd[1]: Starting modprobe@loop.service... Aug 13 00:04:30.975824 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:04:30.975834 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:04:30.975843 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 00:04:30.975853 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:04:30.975863 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:04:30.975872 systemd[1]: Stopped systemd-journald.service. Aug 13 00:04:30.975881 kernel: loop: module loaded Aug 13 00:04:30.975891 systemd[1]: systemd-journald.service: Consumed 3.182s CPU time. Aug 13 00:04:30.975901 systemd[1]: Starting systemd-journald.service... Aug 13 00:04:30.975910 kernel: fuse: init (API version 7.34) Aug 13 00:04:30.975919 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:04:30.975929 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:04:30.975938 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:04:30.975948 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:04:30.975957 systemd[1]: verity-setup.service: Deactivated successfully. 
Aug 13 00:04:30.975966 systemd[1]: Stopped verity-setup.service. Aug 13 00:04:30.975977 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:04:30.975988 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:04:30.975998 systemd[1]: Mounted media.mount. Aug 13 00:04:30.976007 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:04:30.976016 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:04:30.976026 systemd[1]: Mounted tmp.mount. Aug 13 00:04:30.976035 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:04:30.976045 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:04:30.976055 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:04:30.976066 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:04:30.976079 systemd-journald[1211]: Journal started Aug 13 00:04:30.976129 systemd-journald[1211]: Runtime Journal (/run/log/journal/20e7873a208146d49f161b4f1ec315b8) is 8.0M, max 78.5M, 70.5M free. Aug 13 00:04:22.172000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:04:22.823000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:04:22.823000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:04:22.823000 audit: BPF prog-id=10 op=LOAD Aug 13 00:04:22.823000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:04:22.824000 audit: BPF prog-id=11 op=LOAD Aug 13 00:04:22.824000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:04:24.029000 audit[1104]: AVC avc: denied { associate } for pid=1104 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:04:24.029000 audit[1104]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:24.029000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:04:24.040000 audit[1104]: AVC avc: denied { associate } for pid=1104 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:04:24.040000 audit[1104]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145975 a2=1ed a3=0 items=2 ppid=1087 pid=1104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:24.040000 audit: CWD cwd="/" Aug 13 00:04:24.040000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
00:04:24.040000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:24.040000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:04:30.129000 audit: BPF prog-id=12 op=LOAD Aug 13 00:04:30.129000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:04:30.136000 audit: BPF prog-id=13 op=LOAD Aug 13 00:04:30.144000 audit: BPF prog-id=14 op=LOAD Aug 13 00:04:30.144000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:04:30.144000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:04:30.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.180000 audit: BPF prog-id=12 op=UNLOAD Aug 13 00:04:30.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.801000 audit: BPF prog-id=15 op=LOAD Aug 13 00:04:30.802000 audit: BPF prog-id=16 op=LOAD Aug 13 00:04:30.802000 audit: BPF prog-id=17 op=LOAD Aug 13 00:04:30.802000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:04:30.802000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:04:30.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:30.972000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:04:30.972000 audit[1211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffff061340 a2=4000 a3=1 items=0 ppid=1 pid=1211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:30.972000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:04:30.127273 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:04:30.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:23.985014 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:04:30.127288 systemd[1]: Unnecessary job was removed for dev-sda6.device. Aug 13 00:04:23.985297 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:04:30.145290 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:04:23.985321 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:04:30.145732 systemd[1]: systemd-journald.service: Consumed 3.182s CPU time. 
Aug 13 00:04:23.985361 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 00:04:23.985371 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 00:04:23.985399 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 00:04:23.985411 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 00:04:23.985615 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 00:04:23.985646 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:04:23.985658 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:23Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:04:24.015164 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 00:04:24.015220 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 00:04:24.015242 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 00:04:24.015256 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 00:04:24.015278 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 00:04:24.015292 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 00:04:29.175026 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:29Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:04:29.175355 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:29Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:04:29.175462 
/usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:29Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:04:29.175635 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:29Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:04:29.175691 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:29Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 00:04:29.175750 /usr/lib/systemd/system-generators/torcx-generator[1104]: time="2025-08-13T00:04:29Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 00:04:30.986807 systemd[1]: Started systemd-journald.service. Aug 13 00:04:30.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.987952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:04:30.988110 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:04:30.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.993236 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:04:30.993472 systemd[1]: Finished modprobe@drm.service. Aug 13 00:04:30.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:30.998964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:04:30.999299 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:04:31.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:31.005569 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:04:31.005748 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:04:31.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.011266 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:04:31.011706 systemd[1]: Finished modprobe@loop.service. Aug 13 00:04:31.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.017582 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:04:31.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.023922 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:04:31.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.029958 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:04:31.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.035663 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:04:31.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.041246 systemd[1]: Reached target network-pre.target. Aug 13 00:04:31.047353 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:04:31.053287 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:04:31.057433 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:04:31.103246 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:04:31.109205 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:04:31.114093 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:04:31.115490 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:04:31.120759 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:04:31.122110 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:04:31.128151 systemd[1]: Starting systemd-sysusers.service... 
Aug 13 00:04:31.134108 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:04:31.141630 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:04:31.150899 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:04:31.163581 udevadm[1225]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:04:31.165074 systemd-journald[1211]: Time spent on flushing to /var/log/journal/20e7873a208146d49f161b4f1ec315b8 is 17.598ms for 1081 entries. Aug 13 00:04:31.165074 systemd-journald[1211]: System Journal (/var/log/journal/20e7873a208146d49f161b4f1ec315b8) is 8.0M, max 2.6G, 2.6G free. Aug 13 00:04:31.250617 systemd-journald[1211]: Received client request to flush runtime journal. Aug 13 00:04:31.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.180818 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:04:31.186489 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:04:31.223541 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:04:31.251775 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:04:31.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.641014 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:04:31.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:31.647228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:04:31.987635 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:04:31.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:32.308197 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:04:32.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:32.314000 audit: BPF prog-id=18 op=LOAD Aug 13 00:04:32.314000 audit: BPF prog-id=19 op=LOAD Aug 13 00:04:32.314000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:04:32.314000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:04:32.315578 systemd[1]: Starting systemd-udevd.service... Aug 13 00:04:32.335164 systemd-udevd[1230]: Using default interface naming scheme 'v252'. Aug 13 00:04:32.566836 systemd[1]: Started systemd-udevd.service. Aug 13 00:04:32.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:32.576000 audit: BPF prog-id=20 op=LOAD Aug 13 00:04:32.579062 systemd[1]: Starting systemd-networkd.service... Aug 13 00:04:32.624000 audit: BPF prog-id=21 op=LOAD Aug 13 00:04:32.624000 audit: BPF prog-id=22 op=LOAD Aug 13 00:04:32.624000 audit: BPF prog-id=23 op=LOAD Aug 13 00:04:32.625504 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:04:32.635241 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Aug 13 00:04:32.692000 audit[1236]: AVC avc: denied { confidentiality } for pid=1236 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:04:32.709159 kernel: hv_vmbus: registering driver hv_balloon Aug 13 00:04:32.709279 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:04:32.710422 systemd[1]: Started systemd-userdbd.service. Aug 13 00:04:32.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:32.730355 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 13 00:04:32.731078 kernel: hv_balloon: Memory hot add disabled on ARM64 Aug 13 00:04:32.692000 audit[1236]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaabf42ffa0 a1=aa2c a2=ffffab4124b0 a3=aaaabf390010 items=12 ppid=1230 pid=1236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:32.692000 audit: CWD cwd="/" Aug 13 00:04:32.692000 audit: PATH item=0 name=(null) inode=5609 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=1 name=(null) inode=10131 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=2 name=(null) inode=10131 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=3 name=(null) inode=10132 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=4 name=(null) inode=10131 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=5 name=(null) inode=10133 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=6 name=(null) inode=10131 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=7 name=(null) inode=10134 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=8 name=(null) inode=10131 dev=00:0a mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=9 name=(null) inode=10135 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=10 name=(null) inode=10131 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PATH item=11 name=(null) inode=10136 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:04:32.692000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:04:32.769119 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 00:04:32.769274 kernel: hv_vmbus: registering driver hv_utils Aug 13 00:04:32.787152 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 00:04:32.787314 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 00:04:32.792421 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 00:04:32.809350 kernel: hv_vmbus: registering driver hyperv_fb Aug 13 00:04:32.828769 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 13 00:04:32.828924 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 13 00:04:32.839001 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:04:32.842438 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:04:32.984885 systemd-networkd[1251]: lo: Link UP Aug 13 00:04:32.984899 systemd-networkd[1251]: lo: Gained carrier Aug 13 00:04:32.985377 systemd-networkd[1251]: Enumeration completed Aug 13 00:04:32.985497 systemd[1]: Started systemd-networkd.service. Aug 13 00:04:32.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:32.993050 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:04:33.018438 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:04:33.035162 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:04:33.046983 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:04:33.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:33.054495 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:04:33.080341 kernel: mlx5_core 5891:00:02.0 enP22673s1: Link up Aug 13 00:04:33.130361 kernel: hv_netvsc 0022487a-e6a6-0022-487a-e6a60022487a eth0: Data path switched to VF: enP22673s1 Aug 13 00:04:33.132529 systemd-networkd[1251]: enP22673s1: Link UP Aug 13 00:04:33.133195 systemd-networkd[1251]: eth0: Link UP Aug 13 00:04:33.133368 systemd-networkd[1251]: eth0: Gained carrier Aug 13 00:04:33.145175 systemd-networkd[1251]: enP22673s1: Gained carrier Aug 13 00:04:33.163478 systemd-networkd[1251]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:04:33.275392 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
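The DHCPv4 record above gives eth0 the address 10.200.20.21/24 with gateway 10.200.20.1, handed out by 168.63.129.16 (the Azure platform endpoint). A minimal sketch, assuming nothing beyond the values printed in the log, uses Python's ipaddress module to derive the network and confirm the gateway lies inside the same /24; purely illustrative arithmetic, not part of systemd-networkd.

```python
# Small illustration using the DHCP lease values shown above; not part of
# systemd-networkd. Derives the /24 network for eth0 and checks the gateway.
import ipaddress

iface = ipaddress.ip_interface("10.200.20.21/24")     # address from the log
gateway = ipaddress.ip_address("10.200.20.1")         # gateway from the log

print("network:", iface.network)                      # 10.200.20.0/24
print("gateway in same network:", gateway in iface.network)  # True
```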
Aug 13 00:04:33.314380 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:04:33.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:33.319736 systemd[1]: Reached target cryptsetup.target. Aug 13 00:04:33.326501 systemd[1]: Starting lvm2-activation.service... Aug 13 00:04:33.331658 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:04:33.355585 systemd[1]: Finished lvm2-activation.service. Aug 13 00:04:33.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:33.361073 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:04:33.366143 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:04:33.366184 systemd[1]: Reached target local-fs.target. Aug 13 00:04:33.371151 systemd[1]: Reached target machines.target. Aug 13 00:04:33.377542 systemd[1]: Starting ldconfig.service... Aug 13 00:04:33.396200 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:04:33.396287 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:04:33.397740 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:04:33.404637 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:04:33.414767 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:04:33.423306 systemd[1]: Starting systemd-sysext.service... Aug 13 00:04:33.442679 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1310 (bootctl) Aug 13 00:04:33.444224 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:04:33.532095 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:04:33.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:33.580451 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:04:33.802975 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:04:33.803200 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:04:33.849383 kernel: loop0: detected capacity change from 0 to 211168 Aug 13 00:04:33.905350 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:04:33.911938 systemd-fsck[1317]: fsck.fat 4.2 (2021-01-31) Aug 13 00:04:33.911938 systemd-fsck[1317]: /dev/sda1: 236 files, 117307/258078 clusters Aug 13 00:04:33.914993 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:04:33.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:33.930603 kernel: loop1: detected capacity change from 0 to 211168 Aug 13 00:04:33.924246 systemd[1]: Mounting boot.mount... Aug 13 00:04:33.938889 systemd[1]: Mounted boot.mount. Aug 13 00:04:33.948355 (sd-sysext)[1322]: Using extensions 'kubernetes'. Aug 13 00:04:33.950859 (sd-sysext)[1322]: Merged extensions into '/usr'. Aug 13 00:04:33.954536 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:04:33.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:33.974098 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:04:33.978778 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:04:33.980700 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:04:33.986935 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:04:33.992976 systemd[1]: Starting modprobe@loop.service... Aug 13 00:04:33.997435 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:04:33.997600 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:04:33.999718 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:04:34.001606 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:04:34.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.007696 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:04:34.012021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:04:34.012172 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:04:34.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.017291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:04:34.017549 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:04:34.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.023061 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:04:34.023202 systemd[1]: Finished modprobe@loop.service. Aug 13 00:04:34.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:04:34.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.028835 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:04:34.028944 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.030261 systemd[1]: Finished systemd-sysext.service. Aug 13 00:04:34.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.036887 systemd[1]: Starting ensure-sysext.service... Aug 13 00:04:34.042653 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:04:34.054752 systemd[1]: Reloading. Aug 13 00:04:34.058409 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:04:34.094385 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:04:34.110170 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:04:34.111654 /usr/lib/systemd/system-generators/torcx-generator[1353]: time="2025-08-13T00:04:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:04:34.112140 /usr/lib/systemd/system-generators/torcx-generator[1353]: time="2025-08-13T00:04:34Z" level=info msg="torcx already run" Aug 13 00:04:34.211095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:04:34.211123 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:04:34.227011 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
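The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") come from the same path being declared in more than one tmpfiles.d fragment. A hedged sketch of how such duplicates could be spotted is below; the directory and the warning wording come from the log, while the parsing (path in the second whitespace-separated column) is a simplification of the real tmpfiles.d syntax.

```python
# Rough illustration of where the "Duplicate line for path ..." warnings above
# come from: the same path declared in more than one tmpfiles.d fragment.
# Simplified parsing; real systemd-tmpfiles also handles specifiers and
# /etc-over-/usr overrides.
import glob
from collections import defaultdict

def find_duplicate_paths(pattern="/usr/lib/tmpfiles.d/*.conf"):
    seen = defaultdict(list)  # path -> [(file, line number), ...]
    for conf in sorted(glob.glob(pattern)):
        with open(conf) as fh:
            for lineno, line in enumerate(fh, start=1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append((conf, lineno))
    return {path: refs for path, refs in seen.items() if len(refs) > 1}

if __name__ == "__main__":
    for path, refs in find_duplicate_paths().items():
        print(path, refs)
```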
Aug 13 00:04:34.292000 audit: BPF prog-id=24 op=LOAD Aug 13 00:04:34.292000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:04:34.293000 audit: BPF prog-id=25 op=LOAD Aug 13 00:04:34.293000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:04:34.293000 audit: BPF prog-id=26 op=LOAD Aug 13 00:04:34.293000 audit: BPF prog-id=27 op=LOAD Aug 13 00:04:34.293000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:04:34.293000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:04:34.295000 audit: BPF prog-id=28 op=LOAD Aug 13 00:04:34.295000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:04:34.295000 audit: BPF prog-id=29 op=LOAD Aug 13 00:04:34.295000 audit: BPF prog-id=30 op=LOAD Aug 13 00:04:34.295000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:04:34.295000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:04:34.295000 audit: BPF prog-id=31 op=LOAD Aug 13 00:04:34.295000 audit: BPF prog-id=32 op=LOAD Aug 13 00:04:34.295000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:04:34.295000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:04:34.315173 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.316799 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:04:34.322673 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:04:34.328678 systemd[1]: Starting modprobe@loop.service... Aug 13 00:04:34.332978 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.333137 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:04:34.334120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:04:34.334304 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:04:34.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.339048 systemd-networkd[1251]: eth0: Gained IPv6LL Aug 13 00:04:34.340177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:04:34.340367 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:04:34.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.346843 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:04:34.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.353141 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:04:34.353283 systemd[1]: Finished modprobe@loop.service. 
Aug 13 00:04:34.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.360073 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.361658 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:04:34.367840 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:04:34.374003 systemd[1]: Starting modprobe@loop.service... Aug 13 00:04:34.378307 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.378476 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:04:34.379371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:04:34.379549 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:04:34.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.385547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:04:34.385702 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:04:34.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.391057 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:04:34.391198 systemd[1]: Finished modprobe@loop.service. Aug 13 00:04:34.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.397049 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:04:34.397150 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.399923 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.401568 systemd[1]: Starting modprobe@dm_mod.service... 
Aug 13 00:04:34.407445 systemd[1]: Starting modprobe@drm.service... Aug 13 00:04:34.413980 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:04:34.420693 systemd[1]: Starting modprobe@loop.service... Aug 13 00:04:34.425093 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.425252 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:04:34.426248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:04:34.426445 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:04:34.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.432414 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:04:34.432554 systemd[1]: Finished modprobe@drm.service. Aug 13 00:04:34.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.437957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:04:34.438094 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:04:34.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.444252 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:04:34.444420 systemd[1]: Finished modprobe@loop.service. Aug 13 00:04:34.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.450080 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:04:34.450165 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:04:34.451665 systemd[1]: Finished ensure-sysext.service. 
Aug 13 00:04:34.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.615888 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:04:34.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.623170 systemd[1]: Starting audit-rules.service... Aug 13 00:04:34.629446 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:04:34.636505 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:04:34.643000 audit: BPF prog-id=33 op=LOAD Aug 13 00:04:34.644820 systemd[1]: Starting systemd-resolved.service... Aug 13 00:04:34.651000 audit: BPF prog-id=34 op=LOAD Aug 13 00:04:34.652833 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:04:34.658757 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:04:34.691000 audit[1429]: SYSTEM_BOOT pid=1429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.695766 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:04:34.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.705883 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:04:34.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.711033 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:04:34.809670 systemd-resolved[1427]: Positive Trust Anchors: Aug 13 00:04:34.810047 systemd-resolved[1427]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:04:34.810137 systemd-resolved[1427]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:04:34.813786 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:04:34.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.818942 systemd[1]: Reached target time-set.target. Aug 13 00:04:34.856858 systemd-resolved[1427]: Using system hostname 'ci-3510.3.8-a-72bb20ad6b'. Aug 13 00:04:34.858842 systemd[1]: Started systemd-resolved.service. 
Aug 13 00:04:34.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:34.864560 systemd[1]: Reached target network.target. Aug 13 00:04:34.869479 systemd[1]: Reached target network-online.target. Aug 13 00:04:34.875069 systemd[1]: Reached target nss-lookup.target. Aug 13 00:04:34.895725 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:04:34.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:04:35.019000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:04:35.019000 audit[1444]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd83a97e0 a2=420 a3=0 items=0 ppid=1423 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:04:35.019000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:04:35.033921 augenrules[1444]: No rules Aug 13 00:04:35.035054 systemd[1]: Finished audit-rules.service. Aug 13 00:04:35.042905 systemd-timesyncd[1428]: Contacted time server 64.227.104.228:123 (0.flatcar.pool.ntp.org). Aug 13 00:04:35.042985 systemd-timesyncd[1428]: Initial clock synchronization to Wed 2025-08-13 00:04:35.042419 UTC. Aug 13 00:04:40.819078 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:04:40.833052 systemd[1]: Finished ldconfig.service. Aug 13 00:04:40.840188 systemd[1]: Starting systemd-update-done.service... Aug 13 00:04:40.863525 systemd[1]: Finished systemd-update-done.service. Aug 13 00:04:40.869420 systemd[1]: Reached target sysinit.target. Aug 13 00:04:40.874258 systemd[1]: Started motdgen.path. Aug 13 00:04:40.878085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:04:40.884693 systemd[1]: Started logrotate.timer. Aug 13 00:04:40.888803 systemd[1]: Started mdadm.timer. Aug 13 00:04:40.892910 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:04:40.897870 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:04:40.897908 systemd[1]: Reached target paths.target. Aug 13 00:04:40.902544 systemd[1]: Reached target timers.target. Aug 13 00:04:40.908953 systemd[1]: Listening on dbus.socket. Aug 13 00:04:40.915379 systemd[1]: Starting docker.socket... Aug 13 00:04:40.923797 systemd[1]: Listening on sshd.socket. Aug 13 00:04:40.929898 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:04:40.930853 systemd[1]: Listening on docker.socket. Aug 13 00:04:40.936109 systemd[1]: Reached target sockets.target. Aug 13 00:04:40.941466 systemd[1]: Reached target basic.target. Aug 13 00:04:40.946612 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
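The ldconfig message above ("/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") refers to the four-byte ELF magic that every ELF object begins with; ld.so.conf is a plain-text configuration file, so the check fails and ldconfig skips it. A minimal sketch of that kind of magic-byte test, not ldconfig's actual code, with the path taken from the log:

```python
# Illustration of the check behind the ldconfig message above: an ELF object
# starts with the 4-byte magic b"\x7fELF"; a text file such as ld.so.conf
# does not. This is not ldconfig's code.
ELF_MAGIC = b"\x7fELF"

def is_elf(path):
    """True if the file begins with the standard ELF magic bytes."""
    try:
        with open(path, "rb") as fh:
            return fh.read(4) == ELF_MAGIC
    except OSError:
        return False

if __name__ == "__main__":
    print("/lib/ld.so.conf:", "ELF" if is_elf("/lib/ld.so.conf") else "not an ELF file")
```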
Aug 13 00:04:40.946644 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:04:40.948387 systemd[1]: Starting containerd.service... Aug 13 00:04:40.954955 systemd[1]: Starting dbus.service... Aug 13 00:04:40.961648 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:04:40.969069 systemd[1]: Starting extend-filesystems.service... Aug 13 00:04:40.973921 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:04:40.989440 systemd[1]: Starting kubelet.service... Aug 13 00:04:40.995513 systemd[1]: Starting motdgen.service... Aug 13 00:04:41.000943 systemd[1]: Started nvidia.service. Aug 13 00:04:41.006516 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:04:41.018083 systemd[1]: Starting sshd-keygen.service... Aug 13 00:04:41.030172 jq[1454]: false Aug 13 00:04:41.027371 systemd[1]: Starting systemd-logind.service... Aug 13 00:04:41.032720 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:04:41.032809 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:04:41.034454 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:04:41.035439 systemd[1]: Starting update-engine.service... Aug 13 00:04:41.040892 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:04:41.049591 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:04:41.049797 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:04:41.052832 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:04:41.053031 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:04:41.063553 jq[1472]: true Aug 13 00:04:41.074247 extend-filesystems[1455]: Found loop1 Aug 13 00:04:41.078903 extend-filesystems[1455]: Found sda Aug 13 00:04:41.078903 extend-filesystems[1455]: Found sda1 Aug 13 00:04:41.078903 extend-filesystems[1455]: Found sda2 Aug 13 00:04:41.078903 extend-filesystems[1455]: Found sda3 Aug 13 00:04:41.078903 extend-filesystems[1455]: Found usr Aug 13 00:04:41.078903 extend-filesystems[1455]: Found sda4 Aug 13 00:04:41.078903 extend-filesystems[1455]: Found sda6 Aug 13 00:04:41.078903 extend-filesystems[1455]: Found sda7 Aug 13 00:04:41.078903 extend-filesystems[1455]: Found sda9 Aug 13 00:04:41.078903 extend-filesystems[1455]: Checking size of /dev/sda9 Aug 13 00:04:41.130870 jq[1475]: true Aug 13 00:04:41.138784 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:04:41.138986 systemd[1]: Finished motdgen.service. Aug 13 00:04:41.144728 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:04:41.145569 systemd-logind[1466]: New seat seat0. Aug 13 00:04:41.193979 extend-filesystems[1455]: Old size kept for /dev/sda9 Aug 13 00:04:41.218850 extend-filesystems[1455]: Found sr0 Aug 13 00:04:41.208967 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:04:41.209176 systemd[1]: Finished extend-filesystems.service. 
Aug 13 00:04:41.227677 env[1486]: time="2025-08-13T00:04:41.227623798Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:04:41.258887 env[1486]: time="2025-08-13T00:04:41.258826616Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:04:41.260853 env[1486]: time="2025-08-13T00:04:41.260809424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:41.262572 env[1486]: time="2025-08-13T00:04:41.262522637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:04:41.263539 env[1486]: time="2025-08-13T00:04:41.262675554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:41.264057 env[1486]: time="2025-08-13T00:04:41.264018413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:04:41.264183 env[1486]: time="2025-08-13T00:04:41.264166450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:41.264254 env[1486]: time="2025-08-13T00:04:41.264238689Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:04:41.264312 env[1486]: time="2025-08-13T00:04:41.264298448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:41.264505 env[1486]: time="2025-08-13T00:04:41.264485405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:41.264806 bash[1499]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:04:41.265303 env[1486]: time="2025-08-13T00:04:41.265274473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:04:41.265770 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:04:41.273095 env[1486]: time="2025-08-13T00:04:41.273056108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:04:41.273236 env[1486]: time="2025-08-13T00:04:41.273219745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:04:41.273459 env[1486]: time="2025-08-13T00:04:41.273438661Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:04:41.273564 env[1486]: time="2025-08-13T00:04:41.273549140Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295183672Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295246831Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295263230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295310270Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295371429Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295389668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295403548Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295812222Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295834981Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295848581Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295861341Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.295875501Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.296052298Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:04:41.296443 env[1486]: time="2025-08-13T00:04:41.296129617Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.296915804Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.296977603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.296994083Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297064122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297081001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297095001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297106601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297129920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297143440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297155000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297167000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297180720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297409396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297445235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297520 env[1486]: time="2025-08-13T00:04:41.297459515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:04:41.297871 env[1486]: time="2025-08-13T00:04:41.297478275Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:04:41.298225 env[1486]: time="2025-08-13T00:04:41.297495755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:04:41.298225 env[1486]: time="2025-08-13T00:04:41.297925428Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:04:41.298225 env[1486]: time="2025-08-13T00:04:41.297957267Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:04:41.298225 env[1486]: time="2025-08-13T00:04:41.298013946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:04:41.298513 env[1486]: time="2025-08-13T00:04:41.298382860Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.298765134Z" level=info msg="Connect containerd service" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.298830893Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.299759718Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.299891716Z" level=info msg="Start subscribing containerd event" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.299935555Z" level=info msg="Start recovering state" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.300019514Z" level=info msg="Start event monitor" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.300041354Z" level=info msg="Start snapshots syncer" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.300050953Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.300059993Z" level=info msg="Start streaming server" Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.300601505Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.300682583Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:04:41.320128 env[1486]: time="2025-08-13T00:04:41.306085576Z" level=info msg="containerd successfully booted in 0.098094s" Aug 13 00:04:41.300842 systemd[1]: Started containerd.service. Aug 13 00:04:41.341695 dbus-daemon[1453]: [system] SELinux support is enabled Aug 13 00:04:41.341936 systemd[1]: Started dbus.service. Aug 13 00:04:41.348616 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:04:41.348661 systemd[1]: Reached target system-config.target. Aug 13 00:04:41.356278 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:04:41.356309 systemd[1]: Reached target user-config.target. Aug 13 00:04:41.364409 systemd[1]: nvidia.service: Deactivated successfully. Aug 13 00:04:41.367130 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:04:41.367503 systemd[1]: Started systemd-logind.service. Aug 13 00:04:41.718459 update_engine[1471]: I0813 00:04:41.696136 1471 main.cc:92] Flatcar Update Engine starting Aug 13 00:04:41.764713 systemd[1]: Started update-engine.service. Aug 13 00:04:41.771995 update_engine[1471]: I0813 00:04:41.764744 1471 update_check_scheduler.cc:74] Next update check in 9m57s Aug 13 00:04:41.773801 systemd[1]: Started locksmithd.service. Aug 13 00:04:42.060575 systemd[1]: Started kubelet.service. Aug 13 00:04:42.539145 kubelet[1558]: E0813 00:04:42.539083 1558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:04:42.541133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:04:42.541277 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:04:42.668261 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:04:42.689256 systemd[1]: Finished sshd-keygen.service. Aug 13 00:04:42.696231 systemd[1]: Starting issuegen.service... Aug 13 00:04:42.702102 systemd[1]: Started waagent.service. Aug 13 00:04:42.707869 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:04:42.708067 systemd[1]: Finished issuegen.service. Aug 13 00:04:42.714562 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:04:42.749844 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:04:42.757715 systemd[1]: Started getty@tty1.service. Aug 13 00:04:42.764270 systemd[1]: Started serial-getty@ttyAMA0.service. Aug 13 00:04:42.771645 systemd[1]: Reached target getty.target. Aug 13 00:04:42.776418 systemd[1]: Reached target multi-user.target. Aug 13 00:04:42.783140 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:04:42.795894 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:04:42.796117 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:04:42.803435 systemd[1]: Startup finished in 843ms (kernel) + 13.033s (initrd) + 21.115s (userspace) = 34.993s. 
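The kubelet failure above is simply a missing configuration file: /var/lib/kubelet/config.yaml does not exist yet on a freshly provisioned node, so the unit exits with status 1 and systemd records the failure; the file is normally written later (for example by kubeadm during init or join), after which the service can start. A hedged sketch of an equivalent pre-flight check, with the path taken from the log and everything else assumed:

```python
# Illustrative pre-flight check mirroring the kubelet error above: exit with
# status 1 when /var/lib/kubelet/config.yaml is missing. Not kubelet code;
# the path comes from the log, the rest is an assumption for illustration.
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def main():
    if not os.path.exists(KUBELET_CONFIG):
        print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}: "
              "no such file or directory", file=sys.stderr)
        return 1  # same outcome as the failed kubelet.service unit above
    print("kubelet config present; startup would proceed past this check")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```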
Aug 13 00:04:43.070851 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:04:43.411932 login[1580]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Aug 13 00:04:43.413495 login[1581]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:04:43.460351 systemd[1]: Created slice user-500.slice. Aug 13 00:04:43.461663 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:04:43.466781 systemd-logind[1466]: New session 2 of user core. Aug 13 00:04:43.501741 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:04:43.503530 systemd[1]: Starting user@500.service... Aug 13 00:04:43.534929 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:43.750988 systemd[1585]: Queued start job for default target default.target. Aug 13 00:04:43.752568 systemd[1585]: Reached target paths.target. Aug 13 00:04:43.752799 systemd[1585]: Reached target sockets.target. Aug 13 00:04:43.752888 systemd[1585]: Reached target timers.target. Aug 13 00:04:43.752965 systemd[1585]: Reached target basic.target. Aug 13 00:04:43.753102 systemd[1585]: Reached target default.target. Aug 13 00:04:43.753205 systemd[1]: Started user@500.service. Aug 13 00:04:43.754009 systemd[1585]: Startup finished in 211ms. Aug 13 00:04:43.754342 systemd[1]: Started session-2.scope. Aug 13 00:04:44.412876 login[1580]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:04:44.421428 systemd[1]: Started session-1.scope. Aug 13 00:04:44.422021 systemd-logind[1466]: New session 1 of user core. Aug 13 00:04:48.251662 waagent[1577]: 2025-08-13T00:04:48.251523Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Aug 13 00:04:48.259976 waagent[1577]: 2025-08-13T00:04:48.259860Z INFO Daemon Daemon OS: flatcar 3510.3.8 Aug 13 00:04:48.266469 waagent[1577]: 2025-08-13T00:04:48.266350Z INFO Daemon Daemon Python: 3.9.16 Aug 13 00:04:48.271669 waagent[1577]: 2025-08-13T00:04:48.271530Z INFO Daemon Daemon Run daemon Aug 13 00:04:48.276395 waagent[1577]: 2025-08-13T00:04:48.276287Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Aug 13 00:04:48.295646 waagent[1577]: 2025-08-13T00:04:48.295484Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
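The "Unable to get cloud-init enabled status" pair of messages above comes from the agent probing for cloud-init with the systemctl command quoted verbatim in the log, then falling back to the legacy `service` tool (which does not exist on Flatcar); any failure is reported as "cloud-init is enabled: False". A rough Python equivalent of that probe; the fallback arguments are an assumption, since the log only shows that the `service` binary is missing:

```python
# Sketch of the cloud-init probe logged above. The systemctl command is quoted
# verbatim in the log; the `service` fallback arguments are assumed.
import subprocess

def cloud_init_enabled():
    for cmd in (["systemctl", "is-enabled", "cloud-init-local.service"],
                ["service", "cloud-init", "status"]):
        try:
            subprocess.run(cmd, check=True, capture_output=True)
            return True
        except (subprocess.CalledProcessError, FileNotFoundError) as err:
            print(f"Unable to get cloud-init enabled status from {cmd[0]}: {err}")
    return False

print("cloud-init is enabled:", cloud_init_enabled())
```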
Aug 13 00:04:48.312671 waagent[1577]: 2025-08-13T00:04:48.312511Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:04:48.323459 waagent[1577]: 2025-08-13T00:04:48.323344Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:04:48.329770 waagent[1577]: 2025-08-13T00:04:48.329647Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:04:48.336313 waagent[1577]: 2025-08-13T00:04:48.336208Z INFO Daemon Daemon Activate resource disk Aug 13 00:04:48.341628 waagent[1577]: 2025-08-13T00:04:48.341522Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:04:48.356640 waagent[1577]: 2025-08-13T00:04:48.356533Z INFO Daemon Daemon Found device: None Aug 13 00:04:48.361781 waagent[1577]: 2025-08-13T00:04:48.361668Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:04:48.371451 waagent[1577]: 2025-08-13T00:04:48.371310Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 00:04:48.384575 waagent[1577]: 2025-08-13T00:04:48.384481Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:04:48.391050 waagent[1577]: 2025-08-13T00:04:48.390944Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:04:48.406222 waagent[1577]: 2025-08-13T00:04:48.406041Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Aug 13 00:04:48.423923 waagent[1577]: 2025-08-13T00:04:48.423764Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:04:48.434679 waagent[1577]: 2025-08-13T00:04:48.434571Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:04:48.440694 waagent[1577]: 2025-08-13T00:04:48.440581Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:04:48.491419 waagent[1577]: 2025-08-13T00:04:48.491225Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:04:48.559217 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 00:04:48.596479 waagent[1577]: 2025-08-13T00:04:48.596290Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:04:48.601936 waagent[1577]: 2025-08-13T00:04:48.601823Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:04:48.609094 waagent[1577]: 2025-08-13T00:04:48.608985Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 13 00:04:48.616509 waagent[1577]: 2025-08-13T00:04:48.616401Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:04:48.622707 waagent[1577]: 2025-08-13T00:04:48.622598Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:04:48.628591 waagent[1577]: 2025-08-13T00:04:48.628485Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:04:48.727964 waagent[1577]: 2025-08-13T00:04:48.727871Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:04:48.735753 waagent[1577]: 2025-08-13T00:04:48.735696Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:04:48.741516 waagent[1577]: 2025-08-13T00:04:48.741412Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:04:49.424897 waagent[1577]: 2025-08-13T00:04:49.424702Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:04:49.444013 waagent[1577]: 2025-08-13T00:04:49.443892Z INFO Daemon Daemon Forcing an update of the goal state.. Aug 13 00:04:49.450221 waagent[1577]: 2025-08-13T00:04:49.450114Z INFO Daemon Daemon Fetching goal state [incarnation 1] Aug 13 00:04:49.549812 waagent[1577]: 2025-08-13T00:04:49.549638Z INFO Daemon Daemon Found private key matching thumbprint 9B118ADDB837DE4E5459632B4FE55B5CD54E9465 Aug 13 00:04:49.559028 waagent[1577]: 2025-08-13T00:04:49.558915Z INFO Daemon Daemon Certificate with thumbprint B7C95F43CD520C91F08C125C2738E29332678DDC has no matching private key. Aug 13 00:04:49.569447 waagent[1577]: 2025-08-13T00:04:49.569260Z INFO Daemon Daemon Fetch goal state completed Aug 13 00:04:49.597095 waagent[1577]: 2025-08-13T00:04:49.597016Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: be67824c-f95f-4184-805f-a4c315af7dbe New eTag: 64542204621655160] Aug 13 00:04:49.608158 waagent[1577]: 2025-08-13T00:04:49.608056Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:04:49.624984 waagent[1577]: 2025-08-13T00:04:49.624881Z INFO Daemon Daemon Starting provisioning Aug 13 00:04:49.630584 waagent[1577]: 2025-08-13T00:04:49.630433Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:04:49.635754 waagent[1577]: 2025-08-13T00:04:49.635654Z INFO Daemon Daemon Set hostname [ci-3510.3.8-a-72bb20ad6b] Aug 13 00:04:49.673174 waagent[1577]: 2025-08-13T00:04:49.673005Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-a-72bb20ad6b] Aug 13 00:04:49.680762 waagent[1577]: 2025-08-13T00:04:49.680610Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:04:49.688690 waagent[1577]: 2025-08-13T00:04:49.688594Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:04:49.708037 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Aug 13 00:04:49.708220 systemd[1]: Stopped systemd-networkd-wait-online.service. Aug 13 00:04:49.708282 systemd[1]: Stopping systemd-networkd-wait-online.service... Aug 13 00:04:49.708599 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:04:49.712389 systemd-networkd[1251]: eth0: DHCPv6 lease lost Aug 13 00:04:49.714406 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:04:49.714625 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:04:49.717676 systemd[1]: Starting systemd-networkd.service... 
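The "Examine /proc/net/route for primary interface" step above resolves to eth0 by finding the interface that owns the default route. A small sketch of that idea, assuming the standard /proc/net/route column layout; it mirrors the intent of the logged step, not the agent's exact code:

```python
# Find the interface holding the default route (Destination == 00000000), which is
# what "Primary interface is [eth0]" reports above. Columns follow the header row
# of /proc/net/route; RTF_UP is flag bit 0x1.
def primary_interface(route_file="/proc/net/route"):
    with open(route_file) as fh:
        next(fh)  # skip header: Iface Destination Gateway Flags ...
        for line in fh:
            fields = line.split()
            iface, destination, flags = fields[0], fields[1], int(fields[3], 16)
            if destination == "00000000" and flags & 0x1:
                return iface
    return None

print("Primary interface is", [primary_interface()])
```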
Aug 13 00:04:49.749101 systemd-networkd[1633]: enP22673s1: Link UP Aug 13 00:04:49.749113 systemd-networkd[1633]: enP22673s1: Gained carrier Aug 13 00:04:49.750227 systemd-networkd[1633]: eth0: Link UP Aug 13 00:04:49.750238 systemd-networkd[1633]: eth0: Gained carrier Aug 13 00:04:49.750629 systemd-networkd[1633]: lo: Link UP Aug 13 00:04:49.750640 systemd-networkd[1633]: lo: Gained carrier Aug 13 00:04:49.750897 systemd-networkd[1633]: eth0: Gained IPv6LL Aug 13 00:04:49.751136 systemd-networkd[1633]: Enumeration completed Aug 13 00:04:49.751985 systemd[1]: Started systemd-networkd.service. Aug 13 00:04:49.753110 systemd-networkd[1633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:04:49.754502 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:04:49.758634 waagent[1577]: 2025-08-13T00:04:49.758418Z INFO Daemon Daemon Create user account if not exists Aug 13 00:04:49.765919 waagent[1577]: 2025-08-13T00:04:49.765808Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:04:49.772882 waagent[1577]: 2025-08-13T00:04:49.772766Z INFO Daemon Daemon Configure sudoer Aug 13 00:04:49.778471 systemd-networkd[1633]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:04:49.779836 waagent[1577]: 2025-08-13T00:04:49.779609Z INFO Daemon Daemon Configure sshd Aug 13 00:04:49.785479 waagent[1577]: 2025-08-13T00:04:49.785351Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:04:49.791061 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:04:50.976842 waagent[1577]: 2025-08-13T00:04:50.976759Z INFO Daemon Daemon Provisioning complete Aug 13 00:04:50.996309 waagent[1577]: 2025-08-13T00:04:50.996222Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:04:51.003222 waagent[1577]: 2025-08-13T00:04:51.003109Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 13 00:04:51.015013 waagent[1577]: 2025-08-13T00:04:51.014891Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Aug 13 00:04:51.358265 waagent[1642]: 2025-08-13T00:04:51.358083Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Aug 13 00:04:51.359688 waagent[1642]: 2025-08-13T00:04:51.359598Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:04:51.360017 waagent[1642]: 2025-08-13T00:04:51.359961Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:04:51.375598 waagent[1642]: 2025-08-13T00:04:51.375468Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Aug 13 00:04:51.376064 waagent[1642]: 2025-08-13T00:04:51.375998Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Aug 13 00:04:51.472288 waagent[1642]: 2025-08-13T00:04:51.472104Z INFO ExtHandler ExtHandler Found private key matching thumbprint 9B118ADDB837DE4E5459632B4FE55B5CD54E9465 Aug 13 00:04:51.472808 waagent[1642]: 2025-08-13T00:04:51.472736Z INFO ExtHandler ExtHandler Certificate with thumbprint B7C95F43CD520C91F08C125C2738E29332678DDC has no matching private key. 
Aug 13 00:04:51.473202 waagent[1642]: 2025-08-13T00:04:51.473130Z INFO ExtHandler ExtHandler Fetch goal state completed Aug 13 00:04:51.490497 waagent[1642]: 2025-08-13T00:04:51.490425Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: b9c99688-4ccc-43ba-b6cb-d9fffce0ccb8 New eTag: 64542204621655160] Aug 13 00:04:51.491477 waagent[1642]: 2025-08-13T00:04:51.491390Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:04:51.609297 waagent[1642]: 2025-08-13T00:04:51.608988Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:04:51.623386 waagent[1642]: 2025-08-13T00:04:51.623237Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1642 Aug 13 00:04:51.627939 waagent[1642]: 2025-08-13T00:04:51.627828Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:04:51.629690 waagent[1642]: 2025-08-13T00:04:51.629576Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 00:04:51.749029 waagent[1642]: 2025-08-13T00:04:51.748959Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:04:51.749780 waagent[1642]: 2025-08-13T00:04:51.749703Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:04:51.760601 waagent[1642]: 2025-08-13T00:04:51.760526Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 00:04:51.761572 waagent[1642]: 2025-08-13T00:04:51.761477Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:04:51.763148 waagent[1642]: 2025-08-13T00:04:51.763059Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Aug 13 00:04:51.765133 waagent[1642]: 2025-08-13T00:04:51.765036Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:04:51.765516 waagent[1642]: 2025-08-13T00:04:51.765422Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:04:51.766461 waagent[1642]: 2025-08-13T00:04:51.766369Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:04:51.767199 waagent[1642]: 2025-08-13T00:04:51.767116Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
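The "[Errno 30] Read-only file system" error above follows from Flatcar's layout: /lib/systemd/system sits on the read-only /usr partition, so the agent cannot persist its waagent-network-setup.service unit there and simply logs the failure. The sketch below only illustrates that failure mode plus one possible workaround (writing the unit under the writable /etc/systemd/system); the fallback path is an assumption and is not something the agent does in this log:

```python
# Illustrate why the agent hits [Errno 30]: /lib/systemd/system is read-only on
# Flatcar, while /etc/systemd/system is writable. Paths are from the log; the
# fallback behaviour and unit body are hypothetical.
import errno
from pathlib import Path

UNIT = "waagent-network-setup.service"
UNIT_BODY = "# unit content elided\n"

def install_unit(primary="/lib/systemd/system", fallback="/etc/systemd/system"):
    try:
        (Path(primary) / UNIT).write_text(UNIT_BODY)
        return Path(primary) / UNIT
    except OSError as err:
        if err.errno != errno.EROFS:  # 30: Read-only file system
            raise
        print(f"Unable to setup the persistent firewall rules: {err}")
        (Path(fallback) / UNIT).write_text(UNIT_BODY)
        return Path(fallback) / UNIT
```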
Aug 13 00:04:51.767667 waagent[1642]: 2025-08-13T00:04:51.767583Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:04:51.767667 waagent[1642]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:04:51.767667 waagent[1642]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:04:51.767667 waagent[1642]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:04:51.767667 waagent[1642]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:04:51.767667 waagent[1642]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:04:51.767667 waagent[1642]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:04:51.770851 waagent[1642]: 2025-08-13T00:04:51.770602Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:04:51.771276 waagent[1642]: 2025-08-13T00:04:51.771179Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:04:51.771807 waagent[1642]: 2025-08-13T00:04:51.771725Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:04:51.773186 waagent[1642]: 2025-08-13T00:04:51.773069Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:04:51.773430 waagent[1642]: 2025-08-13T00:04:51.773367Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:04:51.773675 waagent[1642]: 2025-08-13T00:04:51.773614Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:04:51.774864 waagent[1642]: 2025-08-13T00:04:51.774787Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:04:51.775190 waagent[1642]: 2025-08-13T00:04:51.775079Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:04:51.776135 waagent[1642]: 2025-08-13T00:04:51.776042Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:04:51.776411 waagent[1642]: 2025-08-13T00:04:51.776289Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 13 00:04:51.776730 waagent[1642]: 2025-08-13T00:04:51.776658Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:04:51.789022 waagent[1642]: 2025-08-13T00:04:51.788931Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Aug 13 00:04:51.791338 waagent[1642]: 2025-08-13T00:04:51.791232Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:04:51.795534 waagent[1642]: 2025-08-13T00:04:51.795446Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Aug 13 00:04:51.806539 waagent[1642]: 2025-08-13T00:04:51.806426Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1633' Aug 13 00:04:51.873506 waagent[1642]: 2025-08-13T00:04:51.873358Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
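The routing table dumps above (and repeated further down) are in raw /proc/net/route format, where addresses are little-endian hex. Decoded, they name the three endpoints that matter in this boot: 0114C80A is the gateway 10.200.20.1, 10813FA8 is the Azure wireserver 168.63.129.16, and FEA9FEA9 is the IMDS address 169.254.169.254. A short decoding sketch (the helper name is illustrative):

```python
# Decode the little-endian hex address fields from the /proc/net/route dump above.
import socket
import struct

def hex_route_addr(field):
    """Convert a /proc/net/route hex field (little-endian) to dotted-quad form."""
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

for dest, gw in [("00000000", "0114C80A"), ("10813FA8", "0114C80A"), ("FEA9FEA9", "0114C80A")]:
    print(f"dest={hex_route_addr(dest):<15} via gw={hex_route_addr(gw)}")
```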
Aug 13 00:04:51.944305 waagent[1642]: 2025-08-13T00:04:51.944128Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:04:51.944305 waagent[1642]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:04:51.944305 waagent[1642]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:04:51.944305 waagent[1642]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:e6:a6 brd ff:ff:ff:ff:ff:ff Aug 13 00:04:51.944305 waagent[1642]: 3: enP22673s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:e6:a6 brd ff:ff:ff:ff:ff:ff\ altname enP22673p0s2 Aug 13 00:04:51.944305 waagent[1642]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:04:51.944305 waagent[1642]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:04:51.944305 waagent[1642]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:04:51.944305 waagent[1642]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:04:51.944305 waagent[1642]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Aug 13 00:04:51.944305 waagent[1642]: 2: eth0 inet6 fe80::222:48ff:fe7a:e6a6/64 scope link \ valid_lft forever preferred_lft forever Aug 13 00:04:52.154676 waagent[1642]: 2025-08-13T00:04:52.154516Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Aug 13 00:04:52.708088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:04:52.708283 systemd[1]: Stopped kubelet.service. Aug 13 00:04:52.709920 systemd[1]: Starting kubelet.service... Aug 13 00:04:52.822181 systemd[1]: Started kubelet.service. Aug 13 00:04:52.942581 kubelet[1684]: E0813 00:04:52.942534 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:04:52.945454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:04:52.945593 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
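This is the second time kubelet exits with status 1 (it fails again at restart counters 2 and 3 below): /var/lib/kubelet/config.yaml does not exist yet, and kubelet refuses to start without the file named by --config. A minimal sketch of the failing check, using a hypothetical helper (the real check lives in kubelet's Go code, not Python):

```python
# Hypothetical sketch of the failing preflight step: exit with status 1 when the
# config file named by --config is absent. Paths are taken from the log lines above.
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def load_kubelet_config(path=KUBELET_CONFIG):
    try:
        return path.read_text()
    except FileNotFoundError as err:
        # systemd then records status=1/FAILURE and schedules a restart,
        # which is why the restart counter keeps climbing in this log.
        sys.exit(f'"command failed" err="failed to load kubelet config file, '
                 f'path: {path}, error: {err}"')

if __name__ == "__main__":
    print(load_kubelet_config())
```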
Aug 13 00:04:53.021427 waagent[1577]: 2025-08-13T00:04:53.020531Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Aug 13 00:04:53.027031 waagent[1577]: 2025-08-13T00:04:53.026950Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Aug 13 00:04:54.612812 waagent[1690]: 2025-08-13T00:04:54.612698Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Aug 13 00:04:54.614165 waagent[1690]: 2025-08-13T00:04:54.614071Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Aug 13 00:04:54.614529 waagent[1690]: 2025-08-13T00:04:54.614471Z INFO ExtHandler ExtHandler Python: 3.9.16 Aug 13 00:04:54.614786 waagent[1690]: 2025-08-13T00:04:54.614738Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Aug 13 00:04:54.633288 waagent[1690]: 2025-08-13T00:04:54.633110Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:04:54.634208 waagent[1690]: 2025-08-13T00:04:54.634121Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:04:54.634648 waagent[1690]: 2025-08-13T00:04:54.634585Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:04:54.635068 waagent[1690]: 2025-08-13T00:04:54.635000Z INFO ExtHandler ExtHandler Initializing the goal state... Aug 13 00:04:54.654067 waagent[1690]: 2025-08-13T00:04:54.653934Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:04:54.668211 waagent[1690]: 2025-08-13T00:04:54.668122Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 00:04:54.669912 waagent[1690]: 2025-08-13T00:04:54.669830Z INFO ExtHandler Aug 13 00:04:54.670297 waagent[1690]: 2025-08-13T00:04:54.670233Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d5ae4244-e225-4254-8535-9aa00daee3b8 eTag: 64542204621655160 source: Fabric] Aug 13 00:04:54.671448 waagent[1690]: 2025-08-13T00:04:54.671375Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Aug 13 00:04:54.673222 waagent[1690]: 2025-08-13T00:04:54.673141Z INFO ExtHandler Aug 13 00:04:54.673606 waagent[1690]: 2025-08-13T00:04:54.673543Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:04:54.682062 waagent[1690]: 2025-08-13T00:04:54.681994Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 00:04:54.682989 waagent[1690]: 2025-08-13T00:04:54.682924Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:04:54.705542 waagent[1690]: 2025-08-13T00:04:54.705468Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Aug 13 00:04:54.797390 waagent[1690]: 2025-08-13T00:04:54.797194Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B7C95F43CD520C91F08C125C2738E29332678DDC', 'hasPrivateKey': False} Aug 13 00:04:54.799052 waagent[1690]: 2025-08-13T00:04:54.798960Z INFO ExtHandler Downloaded certificate {'thumbprint': '9B118ADDB837DE4E5459632B4FE55B5CD54E9465', 'hasPrivateKey': True} Aug 13 00:04:54.800709 waagent[1690]: 2025-08-13T00:04:54.800616Z INFO ExtHandler Fetch goal state from WireServer completed Aug 13 00:04:54.802014 waagent[1690]: 2025-08-13T00:04:54.801939Z INFO ExtHandler ExtHandler Goal state initialization completed. 
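The goal-state handling above matches certificates to private keys by SHA-1 thumbprint (9B11... has a key, B7C9... does not). A thumbprint is simply the SHA-1 digest of the certificate's DER encoding, which the standard library can reproduce; the PEM path below is a placeholder, not a file from this host:

```python
# Compute a certificate thumbprint like the ones matched above: SHA-1 over the
# DER form of the certificate. The file path is a placeholder.
import hashlib
import ssl

def thumbprint(pem_path):
    pem = open(pem_path).read()
    der = ssl.PEM_cert_to_DER_cert(pem)  # strip PEM armor, get raw DER bytes
    return hashlib.sha1(der).hexdigest().upper()

print(thumbprint("/var/lib/waagent/example.crt"))  # e.g. 9B118ADDB837DE4E5459632B4FE55B5CD54E9465
```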
Aug 13 00:04:54.823828 waagent[1690]: 2025-08-13T00:04:54.823673Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Aug 13 00:04:54.835180 waagent[1690]: 2025-08-13T00:04:54.835032Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Aug 13 00:04:54.840593 waagent[1690]: 2025-08-13T00:04:54.840433Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Aug 13 00:04:54.841074 waagent[1690]: 2025-08-13T00:04:54.841010Z INFO ExtHandler ExtHandler Checking state of the firewall Aug 13 00:04:54.900674 waagent[1690]: 2025-08-13T00:04:54.900442Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS', 'DROP'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n', 'iptables: Bad rule (does a matching rule exist in that chain?).\n']. Current state: Aug 13 00:04:54.900674 waagent[1690]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:54.900674 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:54.900674 waagent[1690]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:54.900674 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:54.900674 waagent[1690]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:54.900674 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:54.900674 waagent[1690]: 54 7811 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:04:54.902669 waagent[1690]: 2025-08-13T00:04:54.902561Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Aug 13 00:04:54.907142 waagent[1690]: 2025-08-13T00:04:54.906948Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Aug 13 00:04:54.907813 waagent[1690]: 2025-08-13T00:04:54.907741Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:04:54.908498 waagent[1690]: 2025-08-13T00:04:54.908423Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:04:54.918553 waagent[1690]: 2025-08-13T00:04:54.918481Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Aug 13 00:04:54.919477 waagent[1690]: 2025-08-13T00:04:54.919400Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:04:54.929112 waagent[1690]: 2025-08-13T00:04:54.929010Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1690 Aug 13 00:04:54.933178 waagent[1690]: 2025-08-13T00:04:54.933066Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:04:54.934516 waagent[1690]: 2025-08-13T00:04:54.934433Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Aug 13 00:04:54.935824 waagent[1690]: 2025-08-13T00:04:54.935749Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Aug 13 00:04:54.939283 waagent[1690]: 2025-08-13T00:04:54.939182Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Aug 13 00:04:54.939944 waagent[1690]: 2025-08-13T00:04:54.939874Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Aug 13 00:04:54.941853 waagent[1690]: 2025-08-13T00:04:54.941758Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:04:54.942105 waagent[1690]: 2025-08-13T00:04:54.942027Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:04:54.942419 waagent[1690]: 2025-08-13T00:04:54.942352Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:04:54.943888 waagent[1690]: 2025-08-13T00:04:54.943798Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 13 00:04:54.944378 waagent[1690]: 2025-08-13T00:04:54.944279Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:04:54.944378 waagent[1690]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:04:54.944378 waagent[1690]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:04:54.944378 waagent[1690]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:04:54.944378 waagent[1690]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:04:54.944378 waagent[1690]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:04:54.944378 waagent[1690]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:04:54.947907 waagent[1690]: 2025-08-13T00:04:54.947768Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:04:54.949112 waagent[1690]: 2025-08-13T00:04:54.949020Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:04:54.949407 waagent[1690]: 2025-08-13T00:04:54.949229Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:04:54.949918 waagent[1690]: 2025-08-13T00:04:54.949833Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:04:54.950476 waagent[1690]: 2025-08-13T00:04:54.950379Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Aug 13 00:04:54.955061 waagent[1690]: 2025-08-13T00:04:54.954931Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:04:54.958502 waagent[1690]: 2025-08-13T00:04:54.958241Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:04:54.958936 waagent[1690]: 2025-08-13T00:04:54.958849Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:04:54.959356 waagent[1690]: 2025-08-13T00:04:54.959262Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:04:54.959796 waagent[1690]: 2025-08-13T00:04:54.959716Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:04:54.960361 waagent[1690]: 2025-08-13T00:04:54.960245Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 13 00:04:54.964632 waagent[1690]: 2025-08-13T00:04:54.964528Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:04:54.964632 waagent[1690]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:04:54.964632 waagent[1690]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:04:54.964632 waagent[1690]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:e6:a6 brd ff:ff:ff:ff:ff:ff Aug 13 00:04:54.964632 waagent[1690]: 3: enP22673s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:e6:a6 brd ff:ff:ff:ff:ff:ff\ altname enP22673p0s2 Aug 13 00:04:54.964632 waagent[1690]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:04:54.964632 waagent[1690]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:04:54.964632 waagent[1690]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:04:54.964632 waagent[1690]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:04:54.964632 waagent[1690]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Aug 13 00:04:54.964632 waagent[1690]: 2: eth0 inet6 fe80::222:48ff:fe7a:e6a6/64 scope link \ valid_lft forever preferred_lft forever Aug 13 00:04:54.980098 waagent[1690]: 2025-08-13T00:04:54.979978Z INFO ExtHandler ExtHandler Downloading agent manifest Aug 13 00:04:55.002815 waagent[1690]: 2025-08-13T00:04:55.002705Z INFO ExtHandler ExtHandler Aug 13 00:04:55.006132 waagent[1690]: 2025-08-13T00:04:55.005894Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1e3cc0d6-014e-4866-8e68-2b44ebd9e0ac correlation af71e178-9f5d-4115-a54b-d1fe56efda30 created: 2025-08-13T00:03:26.732086Z] Aug 13 00:04:55.011082 waagent[1690]: 2025-08-13T00:04:55.010989Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Aug 13 00:04:55.018221 waagent[1690]: 2025-08-13T00:04:55.018109Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 15 ms] Aug 13 00:04:55.028089 waagent[1690]: 2025-08-13T00:04:55.027976Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Aug 13 00:04:55.069095 waagent[1690]: 2025-08-13T00:04:55.068934Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. 
The following rules are missing: ['ACCEPT DNS', 'DROP'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n', 'iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. Current state: Aug 13 00:04:55.069095 waagent[1690]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:55.069095 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:55.069095 waagent[1690]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:55.069095 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:55.069095 waagent[1690]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:55.069095 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:55.069095 waagent[1690]: 101 16230 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:04:55.076992 waagent[1690]: 2025-08-13T00:04:55.076747Z INFO ExtHandler ExtHandler Looking for existing remote access users. Aug 13 00:04:55.083769 waagent[1690]: 2025-08-13T00:04:55.083667Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 95BC3B4B-2CC8-4F4C-909B-C38AB93E6A44;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Aug 13 00:04:55.128965 waagent[1690]: 2025-08-13T00:04:55.128810Z INFO EnvHandler ExtHandler The firewall was setup successfully: Aug 13 00:04:55.128965 waagent[1690]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:55.128965 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:55.128965 waagent[1690]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:55.128965 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:55.128965 waagent[1690]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:04:55.128965 waagent[1690]: pkts bytes target prot opt in out source destination Aug 13 00:04:55.128965 waagent[1690]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 00:04:55.128965 waagent[1690]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:04:55.128965 waagent[1690]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:04:55.130673 waagent[1690]: 2025-08-13T00:04:55.130611Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 13 00:05:02.958125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:05:02.958363 systemd[1]: Stopped kubelet.service. Aug 13 00:05:02.960110 systemd[1]: Starting kubelet.service... Aug 13 00:05:03.280459 systemd[1]: Started kubelet.service. Aug 13 00:05:03.334849 kubelet[1740]: E0813 00:05:03.334796 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:05:03.337786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:05:03.337977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:05:13.445261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:05:13.445957 systemd[1]: Created slice system-sshd.slice. Aug 13 00:05:13.446062 systemd[1]: Stopped kubelet.service. Aug 13 00:05:13.447953 systemd[1]: Starting kubelet.service... 
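The "firewall was setup successfully" table above shows the three OUTPUT rules the agent wants for the wireserver: allow DNS (tcp/53) to 168.63.129.16, allow traffic owned by UID 0, and drop new connections from everything else. A sketch of equivalent iptables invocations via subprocess; the match syntax follows the counters table shown, but the table (filter vs. security) and rule order actually used by the agent are assumptions here:

```python
# Recreate the three rules listed in the "firewall was setup successfully" dump.
# Requires root; -w waits for the xtables lock, as in the agent's own invocations.
import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w"] + rule, check=True)
```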
Aug 13 00:05:13.449367 systemd[1]: Started sshd@0-10.200.20.21:22-10.200.16.10:51464.service. Aug 13 00:05:13.648509 systemd[1]: Started kubelet.service. Aug 13 00:05:13.691443 kubelet[1752]: E0813 00:05:13.691379 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:05:13.694020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:05:13.694156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:05:14.462585 sshd[1748]: Accepted publickey for core from 10.200.16.10 port 51464 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:14.479721 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:14.485042 systemd-logind[1466]: New session 3 of user core. Aug 13 00:05:14.485568 systemd[1]: Started session-3.scope. Aug 13 00:05:14.879003 systemd[1]: Started sshd@1-10.200.20.21:22-10.200.16.10:51480.service. Aug 13 00:05:15.355080 sshd[1761]: Accepted publickey for core from 10.200.16.10 port 51480 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:15.356659 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:15.361058 systemd-logind[1466]: New session 4 of user core. Aug 13 00:05:15.361641 systemd[1]: Started session-4.scope. Aug 13 00:05:15.714581 sshd[1761]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:15.717674 systemd[1]: sshd@1-10.200.20.21:22-10.200.16.10:51480.service: Deactivated successfully. Aug 13 00:05:15.718510 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:05:15.719090 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:05:15.720064 systemd-logind[1466]: Removed session 4. Aug 13 00:05:15.796747 systemd[1]: Started sshd@2-10.200.20.21:22-10.200.16.10:51484.service. Aug 13 00:05:16.291541 sshd[1767]: Accepted publickey for core from 10.200.16.10 port 51484 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:16.293524 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:16.298356 systemd[1]: Started session-5.scope. Aug 13 00:05:16.299206 systemd-logind[1466]: New session 5 of user core. Aug 13 00:05:16.655192 sshd[1767]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:16.658133 systemd[1]: sshd@2-10.200.20.21:22-10.200.16.10:51484.service: Deactivated successfully. Aug 13 00:05:16.658911 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:05:16.659452 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:05:16.660175 systemd-logind[1466]: Removed session 5. Aug 13 00:05:16.738680 systemd[1]: Started sshd@3-10.200.20.21:22-10.200.16.10:51486.service. Aug 13 00:05:17.237290 sshd[1773]: Accepted publickey for core from 10.200.16.10 port 51486 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:17.238746 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:17.243424 systemd[1]: Started session-6.scope. Aug 13 00:05:17.243445 systemd-logind[1466]: New session 6 of user core. 
Aug 13 00:05:17.607911 sshd[1773]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:17.610827 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:05:17.611538 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:05:17.611657 systemd[1]: sshd@3-10.200.20.21:22-10.200.16.10:51486.service: Deactivated successfully. Aug 13 00:05:17.612752 systemd-logind[1466]: Removed session 6. Aug 13 00:05:17.686151 systemd[1]: Started sshd@4-10.200.20.21:22-10.200.16.10:51496.service. Aug 13 00:05:18.162574 sshd[1779]: Accepted publickey for core from 10.200.16.10 port 51496 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:18.164134 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:18.168719 systemd-logind[1466]: New session 7 of user core. Aug 13 00:05:18.169945 systemd[1]: Started session-7.scope. Aug 13 00:05:18.792340 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:05:18.792616 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:05:18.806572 systemd[1]: Starting coreos-metadata.service... Aug 13 00:05:18.896412 coreos-metadata[1786]: Aug 13 00:05:18.896 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:05:18.900055 coreos-metadata[1786]: Aug 13 00:05:18.899 INFO Fetch successful Aug 13 00:05:18.900290 coreos-metadata[1786]: Aug 13 00:05:18.900 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 13 00:05:18.902351 coreos-metadata[1786]: Aug 13 00:05:18.902 INFO Fetch successful Aug 13 00:05:18.902668 coreos-metadata[1786]: Aug 13 00:05:18.902 INFO Fetching http://168.63.129.16/machine/eeee9e50-68b6-4ddf-a549-39b2f5e3573c/5e5abc88%2D995a%2D4576%2D8a9d%2D33a586e62397.%5Fci%2D3510.3.8%2Da%2D72bb20ad6b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 13 00:05:18.904530 coreos-metadata[1786]: Aug 13 00:05:18.904 INFO Fetch successful Aug 13 00:05:18.939629 coreos-metadata[1786]: Aug 13 00:05:18.939 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:05:18.950542 coreos-metadata[1786]: Aug 13 00:05:18.950 INFO Fetch successful Aug 13 00:05:18.960741 systemd[1]: Finished coreos-metadata.service. Aug 13 00:05:19.452474 systemd[1]: Stopped kubelet.service. Aug 13 00:05:19.454993 systemd[1]: Starting kubelet.service... Aug 13 00:05:19.496624 systemd[1]: Reloading. Aug 13 00:05:19.589525 /usr/lib/systemd/system-generators/torcx-generator[1844]: time="2025-08-13T00:05:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:05:19.589953 /usr/lib/systemd/system-generators/torcx-generator[1844]: time="2025-08-13T00:05:19Z" level=info msg="torcx already run" Aug 13 00:05:19.674808 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:05:19.675013 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
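coreos-metadata above pulls the goal state from the wireserver and the VM size from IMDS using the URLs quoted in the log. The IMDS call is the easiest to reproduce: plain HTTP to 169.254.169.254 with the Metadata: true header, which IMDS requires. A minimal sketch (only reachable from inside an Azure VM):

```python
# Fetch the VM size the way the coreos-metadata log lines above show. The URL is
# taken verbatim from the log; IMDS rejects requests without the Metadata header.
import urllib.request

IMDS_VMSIZE = ("http://169.254.169.254/metadata/instance/compute/vmSize"
               "?api-version=2017-08-01&format=text")

req = urllib.request.Request(IMDS_VMSIZE, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print("vmSize:", resp.read().decode())
```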
Aug 13 00:05:19.693335 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:05:19.793999 systemd[1]: Started kubelet.service. Aug 13 00:05:19.795939 systemd[1]: Stopping kubelet.service... Aug 13 00:05:19.796221 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:05:19.796457 systemd[1]: Stopped kubelet.service. Aug 13 00:05:19.798506 systemd[1]: Starting kubelet.service... Aug 13 00:05:19.936823 systemd[1]: Started kubelet.service. Aug 13 00:05:19.988915 kubelet[1908]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:05:19.989364 kubelet[1908]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:05:19.989423 kubelet[1908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:05:19.989578 kubelet[1908]: I0813 00:05:19.989542 1908 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:05:20.855847 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Aug 13 00:05:21.355528 kubelet[1908]: I0813 00:05:21.355477 1908 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:05:21.355528 kubelet[1908]: I0813 00:05:21.355517 1908 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:05:21.355902 kubelet[1908]: I0813 00:05:21.355752 1908 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:05:21.380976 kubelet[1908]: I0813 00:05:21.380929 1908 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:05:21.389106 kubelet[1908]: E0813 00:05:21.389042 1908 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:05:21.389106 kubelet[1908]: I0813 00:05:21.389108 1908 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:05:21.394119 kubelet[1908]: I0813 00:05:21.394070 1908 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:05:21.394424 kubelet[1908]: I0813 00:05:21.394383 1908 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:05:21.394698 kubelet[1908]: I0813 00:05:21.394417 1908 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.200.20.21","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:05:21.394698 kubelet[1908]: I0813 00:05:21.394697 1908 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:05:21.394881 kubelet[1908]: I0813 00:05:21.394708 1908 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:05:21.394881 kubelet[1908]: I0813 00:05:21.394858 1908 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:05:21.399786 kubelet[1908]: I0813 00:05:21.399747 1908 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:05:21.399977 kubelet[1908]: I0813 00:05:21.399964 1908 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:05:21.400070 kubelet[1908]: I0813 00:05:21.400060 1908 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:05:21.401964 kubelet[1908]: I0813 00:05:21.401931 1908 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:05:21.402566 kubelet[1908]: E0813 00:05:21.402542 1908 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:21.402684 kubelet[1908]: E0813 00:05:21.402671 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:21.403164 kubelet[1908]: I0813 00:05:21.403137 1908 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:05:21.403832 kubelet[1908]: I0813 00:05:21.403803 1908 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:05:21.403902 kubelet[1908]: W0813 
00:05:21.403885 1908 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:05:21.406210 kubelet[1908]: I0813 00:05:21.406179 1908 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:05:21.406358 kubelet[1908]: I0813 00:05:21.406243 1908 server.go:1289] "Started kubelet" Aug 13 00:05:21.407006 kubelet[1908]: I0813 00:05:21.406932 1908 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:05:21.408156 kubelet[1908]: I0813 00:05:21.408129 1908 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:05:21.409398 kubelet[1908]: I0813 00:05:21.409293 1908 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:05:21.409760 kubelet[1908]: I0813 00:05:21.409724 1908 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:05:21.414269 kubelet[1908]: E0813 00:05:21.414231 1908 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:05:21.422460 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Aug 13 00:05:21.422803 kubelet[1908]: I0813 00:05:21.422768 1908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:05:21.425859 kubelet[1908]: E0813 00:05:21.424830 1908 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.21.185b2ac8bac470cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.21,UID:10.200.20.21,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.200.20.21,},FirstTimestamp:2025-08-13 00:05:21.406202059 +0000 UTC m=+1.461631132,LastTimestamp:2025-08-13 00:05:21.406202059 +0000 UTC m=+1.461631132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.21,}" Aug 13 00:05:21.426469 kubelet[1908]: E0813 00:05:21.426436 1908 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:05:21.426738 kubelet[1908]: E0813 00:05:21.426713 1908 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.200.20.21\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:05:21.427394 kubelet[1908]: I0813 00:05:21.427370 1908 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:05:21.427731 kubelet[1908]: I0813 00:05:21.427701 1908 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:05:21.428054 kubelet[1908]: E0813 00:05:21.427997 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 
00:05:21.428416 kubelet[1908]: I0813 00:05:21.428392 1908 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:05:21.428532 kubelet[1908]: I0813 00:05:21.428500 1908 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:05:21.429792 kubelet[1908]: E0813 00:05:21.429668 1908 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.21.185b2ac8bb3dfad1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.21,UID:10.200.20.21,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.200.20.21,},FirstTimestamp:2025-08-13 00:05:21.414167249 +0000 UTC m=+1.469596322,LastTimestamp:2025-08-13 00:05:21.414167249 +0000 UTC m=+1.469596322,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.21,}" Aug 13 00:05:21.430557 kubelet[1908]: I0813 00:05:21.430405 1908 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:05:21.430691 kubelet[1908]: I0813 00:05:21.430586 1908 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:05:21.437006 kubelet[1908]: I0813 00:05:21.436969 1908 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:05:21.454734 kubelet[1908]: E0813 00:05:21.454629 1908 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.200.20.21.185b2ac8bd875313 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.200.20.21,UID:10.200.20.21,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.200.20.21 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.200.20.21,},FirstTimestamp:2025-08-13 00:05:21.452528403 +0000 UTC m=+1.507957476,LastTimestamp:2025-08-13 00:05:21.452528403 +0000 UTC m=+1.507957476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.200.20.21,}" Aug 13 00:05:21.455327 kubelet[1908]: I0813 00:05:21.455289 1908 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:05:21.455327 kubelet[1908]: I0813 00:05:21.455313 1908 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:05:21.455434 kubelet[1908]: I0813 00:05:21.455351 1908 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:05:21.458416 kubelet[1908]: E0813 00:05:21.458374 1908 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:05:21.459489 kubelet[1908]: E0813 00:05:21.458719 1908 controller.go:145] "Failed to ensure lease exists, will retry" 
err="leases.coordination.k8s.io \"10.200.20.21\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Aug 13 00:05:21.461550 kubelet[1908]: I0813 00:05:21.461514 1908 policy_none.go:49] "None policy: Start" Aug 13 00:05:21.461550 kubelet[1908]: I0813 00:05:21.461551 1908 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:05:21.461701 kubelet[1908]: I0813 00:05:21.461566 1908 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:05:21.470721 systemd[1]: Created slice kubepods.slice. Aug 13 00:05:21.477002 systemd[1]: Created slice kubepods-burstable.slice. Aug 13 00:05:21.480614 systemd[1]: Created slice kubepods-besteffort.slice. Aug 13 00:05:21.489505 kubelet[1908]: E0813 00:05:21.489452 1908 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:05:21.489861 kubelet[1908]: I0813 00:05:21.489792 1908 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:05:21.489861 kubelet[1908]: I0813 00:05:21.489811 1908 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:05:21.490756 kubelet[1908]: I0813 00:05:21.490719 1908 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:05:21.492273 kubelet[1908]: E0813 00:05:21.492210 1908 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:05:21.492909 kubelet[1908]: E0813 00:05:21.492536 1908 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.21\" not found" Aug 13 00:05:21.516589 kubelet[1908]: I0813 00:05:21.516543 1908 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:05:21.518457 kubelet[1908]: I0813 00:05:21.518421 1908 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:05:21.518673 kubelet[1908]: I0813 00:05:21.518653 1908 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:05:21.518762 kubelet[1908]: I0813 00:05:21.518751 1908 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:05:21.518818 kubelet[1908]: I0813 00:05:21.518810 1908 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:05:21.518960 kubelet[1908]: E0813 00:05:21.518945 1908 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 00:05:21.590810 kubelet[1908]: I0813 00:05:21.590763 1908 kubelet_node_status.go:75] "Attempting to register node" node="10.200.20.21" Aug 13 00:05:21.606491 kubelet[1908]: I0813 00:05:21.606367 1908 kubelet_node_status.go:78] "Successfully registered node" node="10.200.20.21" Aug 13 00:05:21.606660 kubelet[1908]: E0813 00:05:21.606643 1908 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.200.20.21\": node \"10.200.20.21\" not found" Aug 13 00:05:21.629488 kubelet[1908]: E0813 00:05:21.629445 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 00:05:21.730695 kubelet[1908]: E0813 00:05:21.730660 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 00:05:21.831264 kubelet[1908]: E0813 00:05:21.831213 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 00:05:21.932127 kubelet[1908]: E0813 00:05:21.932031 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 00:05:22.032544 kubelet[1908]: E0813 00:05:22.032500 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 00:05:22.133025 kubelet[1908]: E0813 00:05:22.132989 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 00:05:22.233485 kubelet[1908]: E0813 00:05:22.233443 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 00:05:22.334442 kubelet[1908]: E0813 00:05:22.334390 1908 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.200.20.21\" not found" Aug 13 00:05:22.357652 kubelet[1908]: I0813 00:05:22.357616 1908 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Aug 13 00:05:22.358291 kubelet[1908]: I0813 00:05:22.357820 1908 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Aug 13 00:05:22.358291 kubelet[1908]: I0813 00:05:22.357856 1908 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Aug 13 00:05:22.403384 kubelet[1908]: I0813 00:05:22.403336 1908 apiserver.go:52] "Watching apiserver" Aug 13 00:05:22.403630 kubelet[1908]: E0813 00:05:22.403605 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:22.422108 systemd[1]: Created slice kubepods-burstable-pod0b15c753_fb50_4c5f_bf72_086bb6f0c77d.slice. 
Aug 13 00:05:22.437020 systemd[1]: Created slice kubepods-besteffort-pode5f8be40_94c1_4a4f_85fd_efd2cb963b73.slice. Aug 13 00:05:22.438174 kubelet[1908]: I0813 00:05:22.438121 1908 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Aug 13 00:05:22.438635 env[1486]: time="2025-08-13T00:05:22.438584876Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:05:22.439064 kubelet[1908]: I0813 00:05:22.438844 1908 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Aug 13 00:05:22.441960 kubelet[1908]: I0813 00:05:22.441916 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-run\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.442462 kubelet[1908]: I0813 00:05:22.442438 1908 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:05:22.483356 sudo[1782]: pam_unix(sudo:session): session closed for user root Aug 13 00:05:22.543110 kubelet[1908]: I0813 00:05:22.542771 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-etc-cni-netd\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.543110 kubelet[1908]: I0813 00:05:22.542851 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-clustermesh-secrets\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.543110 kubelet[1908]: I0813 00:05:22.542870 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-config-path\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.543916 kubelet[1908]: I0813 00:05:22.543480 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-host-proc-sys-net\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.543916 kubelet[1908]: I0813 00:05:22.543517 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-host-proc-sys-kernel\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.543916 kubelet[1908]: I0813 00:05:22.543551 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-hubble-tls\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.543916 kubelet[1908]: I0813 
00:05:22.543568 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clntz\" (UniqueName: \"kubernetes.io/projected/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-kube-api-access-clntz\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.543916 kubelet[1908]: I0813 00:05:22.543589 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5f8be40-94c1-4a4f-85fd-efd2cb963b73-xtables-lock\") pod \"kube-proxy-gnc4q\" (UID: \"e5f8be40-94c1-4a4f-85fd-efd2cb963b73\") " pod="kube-system/kube-proxy-gnc4q" Aug 13 00:05:22.544119 kubelet[1908]: I0813 00:05:22.543628 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppv2\" (UniqueName: \"kubernetes.io/projected/e5f8be40-94c1-4a4f-85fd-efd2cb963b73-kube-api-access-dppv2\") pod \"kube-proxy-gnc4q\" (UID: \"e5f8be40-94c1-4a4f-85fd-efd2cb963b73\") " pod="kube-system/kube-proxy-gnc4q" Aug 13 00:05:22.544119 kubelet[1908]: I0813 00:05:22.543649 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-bpf-maps\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.544119 kubelet[1908]: I0813 00:05:22.543669 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-hostproc\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.544119 kubelet[1908]: I0813 00:05:22.543684 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-cgroup\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.544119 kubelet[1908]: I0813 00:05:22.543712 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cni-path\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.544119 kubelet[1908]: I0813 00:05:22.543733 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-lib-modules\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.544248 kubelet[1908]: I0813 00:05:22.543748 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-xtables-lock\") pod \"cilium-6gqmd\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " pod="kube-system/cilium-6gqmd" Aug 13 00:05:22.544248 kubelet[1908]: I0813 00:05:22.543761 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/e5f8be40-94c1-4a4f-85fd-efd2cb963b73-kube-proxy\") pod \"kube-proxy-gnc4q\" (UID: \"e5f8be40-94c1-4a4f-85fd-efd2cb963b73\") " pod="kube-system/kube-proxy-gnc4q" Aug 13 00:05:22.544248 kubelet[1908]: I0813 00:05:22.543786 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5f8be40-94c1-4a4f-85fd-efd2cb963b73-lib-modules\") pod \"kube-proxy-gnc4q\" (UID: \"e5f8be40-94c1-4a4f-85fd-efd2cb963b73\") " pod="kube-system/kube-proxy-gnc4q" Aug 13 00:05:22.592503 sshd[1779]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:22.595696 systemd[1]: sshd@4-10.200.20.21:22-10.200.16.10:51496.service: Deactivated successfully. Aug 13 00:05:22.596511 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:05:22.597088 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:05:22.597895 systemd-logind[1466]: Removed session 7. Aug 13 00:05:22.645505 kubelet[1908]: I0813 00:05:22.645214 1908 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:05:22.735283 env[1486]: time="2025-08-13T00:05:22.734846258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gqmd,Uid:0b15c753-fb50-4c5f-bf72-086bb6f0c77d,Namespace:kube-system,Attempt:0,}" Aug 13 00:05:22.748514 env[1486]: time="2025-08-13T00:05:22.748458603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gnc4q,Uid:e5f8be40-94c1-4a4f-85fd-efd2cb963b73,Namespace:kube-system,Attempt:0,}" Aug 13 00:05:23.404652 kubelet[1908]: E0813 00:05:23.404586 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:23.856335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117471503.mount: Deactivated successfully. 
Aug 13 00:05:23.874099 env[1486]: time="2025-08-13T00:05:23.873998381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:23.882966 env[1486]: time="2025-08-13T00:05:23.882906052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:23.885477 env[1486]: time="2025-08-13T00:05:23.885427209Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:23.891306 env[1486]: time="2025-08-13T00:05:23.891255643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:23.893227 env[1486]: time="2025-08-13T00:05:23.893167281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:23.896096 env[1486]: time="2025-08-13T00:05:23.896048118Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:23.898409 env[1486]: time="2025-08-13T00:05:23.898331915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:23.905755 env[1486]: time="2025-08-13T00:05:23.905697667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:23.957924 env[1486]: time="2025-08-13T00:05:23.957824852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:23.957924 env[1486]: time="2025-08-13T00:05:23.957880692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:23.957924 env[1486]: time="2025-08-13T00:05:23.957892572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:23.958412 env[1486]: time="2025-08-13T00:05:23.958345771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab pid=1958 runtime=io.containerd.runc.v2 Aug 13 00:05:23.970340 env[1486]: time="2025-08-13T00:05:23.970174878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:23.970509 env[1486]: time="2025-08-13T00:05:23.970368478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:23.970509 env[1486]: time="2025-08-13T00:05:23.970398838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:23.970794 env[1486]: time="2025-08-13T00:05:23.970728758Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d695c91f0b33ac2d7945c1ceaf1df4dd931e8a374cf8c4d8137b453ed8744f1d pid=1977 runtime=io.containerd.runc.v2 Aug 13 00:05:23.979143 systemd[1]: Started cri-containerd-1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab.scope. Aug 13 00:05:23.998383 systemd[1]: Started cri-containerd-d695c91f0b33ac2d7945c1ceaf1df4dd931e8a374cf8c4d8137b453ed8744f1d.scope. Aug 13 00:05:24.025584 env[1486]: time="2025-08-13T00:05:24.025536941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gqmd,Uid:0b15c753-fb50-4c5f-bf72-086bb6f0c77d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\"" Aug 13 00:05:24.028788 env[1486]: time="2025-08-13T00:05:24.028745858Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:05:24.033646 env[1486]: time="2025-08-13T00:05:24.033580893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gnc4q,Uid:e5f8be40-94c1-4a4f-85fd-efd2cb963b73,Namespace:kube-system,Attempt:0,} returns sandbox id \"d695c91f0b33ac2d7945c1ceaf1df4dd931e8a374cf8c4d8137b453ed8744f1d\"" Aug 13 00:05:24.405704 kubelet[1908]: E0813 00:05:24.405666 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:25.406799 kubelet[1908]: E0813 00:05:25.406755 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:26.407604 kubelet[1908]: E0813 00:05:26.407543 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:26.814225 update_engine[1471]: I0813 00:05:26.814126 1471 update_attempter.cc:509] Updating boot flags... Aug 13 00:05:27.408155 kubelet[1908]: E0813 00:05:27.408100 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:28.408601 kubelet[1908]: E0813 00:05:28.408537 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:29.409244 kubelet[1908]: E0813 00:05:29.409173 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:29.786818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538426811.mount: Deactivated successfully. 
Aug 13 00:05:30.410141 kubelet[1908]: E0813 00:05:30.410088 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:31.411032 kubelet[1908]: E0813 00:05:31.410981 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:32.411832 kubelet[1908]: E0813 00:05:32.411783 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:33.287923 env[1486]: time="2025-08-13T00:05:33.287846979Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:33.293825 env[1486]: time="2025-08-13T00:05:33.293777535Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:33.298237 env[1486]: time="2025-08-13T00:05:33.298188573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:33.298982 env[1486]: time="2025-08-13T00:05:33.298942612Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 13 00:05:33.301728 env[1486]: time="2025-08-13T00:05:33.301672811Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 00:05:33.307202 env[1486]: time="2025-08-13T00:05:33.307151808Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:05:33.328530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065869139.mount: Deactivated successfully. Aug 13 00:05:33.334662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1845973566.mount: Deactivated successfully. Aug 13 00:05:33.350955 env[1486]: time="2025-08-13T00:05:33.350896983Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\"" Aug 13 00:05:33.351933 env[1486]: time="2025-08-13T00:05:33.351890903Z" level=info msg="StartContainer for \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\"" Aug 13 00:05:33.375161 systemd[1]: Started cri-containerd-f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8.scope. Aug 13 00:05:33.407948 env[1486]: time="2025-08-13T00:05:33.407890231Z" level=info msg="StartContainer for \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\" returns successfully" Aug 13 00:05:33.413368 kubelet[1908]: E0813 00:05:33.412714 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:33.416382 systemd[1]: cri-containerd-f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8.scope: Deactivated successfully. 
Aug 13 00:05:34.327087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8-rootfs.mount: Deactivated successfully. Aug 13 00:05:34.413495 kubelet[1908]: E0813 00:05:34.413457 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:35.320161 env[1486]: time="2025-08-13T00:05:35.320097415Z" level=info msg="shim disconnected" id=f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8 Aug 13 00:05:35.320161 env[1486]: time="2025-08-13T00:05:35.320152135Z" level=warning msg="cleaning up after shim disconnected" id=f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8 namespace=k8s.io Aug 13 00:05:35.320161 env[1486]: time="2025-08-13T00:05:35.320162935Z" level=info msg="cleaning up dead shim" Aug 13 00:05:35.328909 env[1486]: time="2025-08-13T00:05:35.328851291Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2123 runtime=io.containerd.runc.v2\n" Aug 13 00:05:35.414584 kubelet[1908]: E0813 00:05:35.414549 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:35.554648 env[1486]: time="2025-08-13T00:05:35.554591660Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:05:35.577932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915690117.mount: Deactivated successfully. Aug 13 00:05:35.584118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016701369.mount: Deactivated successfully. Aug 13 00:05:35.595840 env[1486]: time="2025-08-13T00:05:35.595778599Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\"" Aug 13 00:05:35.596469 env[1486]: time="2025-08-13T00:05:35.596440799Z" level=info msg="StartContainer for \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\"" Aug 13 00:05:35.619949 systemd[1]: Started cri-containerd-025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7.scope. Aug 13 00:05:35.655535 env[1486]: time="2025-08-13T00:05:35.655463250Z" level=info msg="StartContainer for \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\" returns successfully" Aug 13 00:05:35.664474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:05:35.664681 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:05:35.664889 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:05:35.666794 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:05:35.669604 systemd[1]: cri-containerd-025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7.scope: Deactivated successfully. Aug 13 00:05:35.680415 systemd[1]: Finished systemd-sysctl.service. 
Aug 13 00:05:35.707583 env[1486]: time="2025-08-13T00:05:35.707525824Z" level=info msg="shim disconnected" id=025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7 Aug 13 00:05:35.707946 env[1486]: time="2025-08-13T00:05:35.707904344Z" level=warning msg="cleaning up after shim disconnected" id=025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7 namespace=k8s.io Aug 13 00:05:35.708043 env[1486]: time="2025-08-13T00:05:35.708026744Z" level=info msg="cleaning up dead shim" Aug 13 00:05:35.718090 env[1486]: time="2025-08-13T00:05:35.718034139Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2188 runtime=io.containerd.runc.v2\n" Aug 13 00:05:36.414839 kubelet[1908]: E0813 00:05:36.414778 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:36.556766 env[1486]: time="2025-08-13T00:05:36.556699303Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:05:36.574984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7-rootfs.mount: Deactivated successfully. Aug 13 00:05:36.588824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885782831.mount: Deactivated successfully. Aug 13 00:05:36.598805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3252096742.mount: Deactivated successfully. Aug 13 00:05:36.619402 env[1486]: time="2025-08-13T00:05:36.619306594Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\"" Aug 13 00:05:36.621215 env[1486]: time="2025-08-13T00:05:36.621163033Z" level=info msg="StartContainer for \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\"" Aug 13 00:05:36.645480 systemd[1]: Started cri-containerd-720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe.scope. Aug 13 00:05:36.681044 systemd[1]: cri-containerd-720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe.scope: Deactivated successfully. 
Aug 13 00:05:36.690562 env[1486]: time="2025-08-13T00:05:36.690509201Z" level=info msg="StartContainer for \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\" returns successfully" Aug 13 00:05:36.838724 env[1486]: time="2025-08-13T00:05:36.838676052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:36.846232 env[1486]: time="2025-08-13T00:05:36.846179649Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:36.850106 env[1486]: time="2025-08-13T00:05:36.850052367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:36.853969 env[1486]: time="2025-08-13T00:05:36.853917405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:36.854906 env[1486]: time="2025-08-13T00:05:36.854631685Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\"" Aug 13 00:05:36.869819 env[1486]: time="2025-08-13T00:05:36.869768998Z" level=info msg="CreateContainer within sandbox \"d695c91f0b33ac2d7945c1ceaf1df4dd931e8a374cf8c4d8137b453ed8744f1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:05:37.274673 env[1486]: time="2025-08-13T00:05:37.274620339Z" level=info msg="shim disconnected" id=720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe Aug 13 00:05:37.275018 env[1486]: time="2025-08-13T00:05:37.274982499Z" level=warning msg="cleaning up after shim disconnected" id=720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe namespace=k8s.io Aug 13 00:05:37.275098 env[1486]: time="2025-08-13T00:05:37.275083179Z" level=info msg="cleaning up dead shim" Aug 13 00:05:37.286252 env[1486]: time="2025-08-13T00:05:37.286206894Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2249 runtime=io.containerd.runc.v2\n" Aug 13 00:05:37.289474 env[1486]: time="2025-08-13T00:05:37.289419332Z" level=info msg="CreateContainer within sandbox \"d695c91f0b33ac2d7945c1ceaf1df4dd931e8a374cf8c4d8137b453ed8744f1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62576afe719f7e37edf2a7105b9e6460217966814687ba339f9e4c5d90f5fd00\"" Aug 13 00:05:37.290624 env[1486]: time="2025-08-13T00:05:37.290578172Z" level=info msg="StartContainer for \"62576afe719f7e37edf2a7105b9e6460217966814687ba339f9e4c5d90f5fd00\"" Aug 13 00:05:37.309339 systemd[1]: Started cri-containerd-62576afe719f7e37edf2a7105b9e6460217966814687ba339f9e4c5d90f5fd00.scope. 
Aug 13 00:05:37.353375 env[1486]: time="2025-08-13T00:05:37.352227105Z" level=info msg="StartContainer for \"62576afe719f7e37edf2a7105b9e6460217966814687ba339f9e4c5d90f5fd00\" returns successfully" Aug 13 00:05:37.415287 kubelet[1908]: E0813 00:05:37.415232 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:37.563700 env[1486]: time="2025-08-13T00:05:37.563544094Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:05:37.590391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284105231.mount: Deactivated successfully. Aug 13 00:05:37.605017 env[1486]: time="2025-08-13T00:05:37.604490996Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\"" Aug 13 00:05:37.606922 env[1486]: time="2025-08-13T00:05:37.605625275Z" level=info msg="StartContainer for \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\"" Aug 13 00:05:37.619636 kubelet[1908]: I0813 00:05:37.619564 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gnc4q" podStartSLOduration=3.7992685550000003 podStartE2EDuration="16.619541349s" podCreationTimestamp="2025-08-13 00:05:21 +0000 UTC" firstStartedPulling="2025-08-13 00:05:24.035456491 +0000 UTC m=+4.090885524" lastFinishedPulling="2025-08-13 00:05:36.855729285 +0000 UTC m=+16.911158318" observedRunningTime="2025-08-13 00:05:37.577874567 +0000 UTC m=+17.633303640" watchObservedRunningTime="2025-08-13 00:05:37.619541349 +0000 UTC m=+17.674970422" Aug 13 00:05:37.629755 systemd[1]: Started cri-containerd-b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc.scope. Aug 13 00:05:37.673604 systemd[1]: cri-containerd-b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc.scope: Deactivated successfully. 
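The pod_startup_latency_tracker entries in this log follow a simple relationship: podStartE2EDuration is watchObservedRunningTime minus the pod's creationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal sketch reproducing the kube-proxy-gnc4q numbers quoted above (variable names are mine, not kubelet identifiers; offsets are seconds after the 00:05:21 creationTimestamp):

```python
# Seconds after kube-proxy-gnc4q's creationTimestamp (2025-08-13 00:05:21 UTC),
# copied from the latency entry above.
first_started_pulling = 3.035456491   # 00:05:24.035456491
last_finished_pulling = 15.855729285  # 00:05:36.855729285
observed_running      = 16.619541349  # 00:05:37.619541349 (watchObservedRunningTime)

pod_start_e2e = observed_running                               # podStartE2EDuration
image_pull    = last_finished_pulling - first_started_pulling  # time spent pulling images
pod_start_slo = pod_start_e2e - image_pull                     # podStartSLOduration

print(f"e2e={pod_start_e2e:.9f}s slo={pod_start_slo:.9f}s")
# e2e=16.619541349s slo=3.799268555s (matches the 3.7992685550000003 above up to float noise)
```

The cilium-6gqmd, nfs-server-provisioner-0 and test-pod-1 entries later in the log check out the same way.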
Aug 13 00:05:37.674948 env[1486]: time="2025-08-13T00:05:37.674867125Z" level=info msg="StartContainer for \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\" returns successfully" Aug 13 00:05:37.708279 env[1486]: time="2025-08-13T00:05:37.708218591Z" level=info msg="shim disconnected" id=b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc Aug 13 00:05:37.708279 env[1486]: time="2025-08-13T00:05:37.708272791Z" level=warning msg="cleaning up after shim disconnected" id=b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc namespace=k8s.io Aug 13 00:05:37.708279 env[1486]: time="2025-08-13T00:05:37.708282191Z" level=info msg="cleaning up dead shim" Aug 13 00:05:37.716754 env[1486]: time="2025-08-13T00:05:37.716689307Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2416 runtime=io.containerd.runc.v2\n" Aug 13 00:05:38.416229 kubelet[1908]: E0813 00:05:38.416154 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:38.566956 env[1486]: time="2025-08-13T00:05:38.566902874Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:05:38.575356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc-rootfs.mount: Deactivated successfully. Aug 13 00:05:38.595588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192656379.mount: Deactivated successfully. Aug 13 00:05:38.601200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49427742.mount: Deactivated successfully. Aug 13 00:05:38.615701 env[1486]: time="2025-08-13T00:05:38.615640294Z" level=info msg="CreateContainer within sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\"" Aug 13 00:05:38.616654 env[1486]: time="2025-08-13T00:05:38.616621654Z" level=info msg="StartContainer for \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\"" Aug 13 00:05:38.631310 systemd[1]: Started cri-containerd-b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8.scope. Aug 13 00:05:38.676744 env[1486]: time="2025-08-13T00:05:38.676182550Z" level=info msg="StartContainer for \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\" returns successfully" Aug 13 00:05:38.777952 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Aug 13 00:05:38.787440 kubelet[1908]: I0813 00:05:38.786257 1908 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:05:39.223361 kernel: Initializing XFRM netlink socket Aug 13 00:05:39.232390 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Aug 13 00:05:39.417732 kubelet[1908]: E0813 00:05:39.417648 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:39.598037 kubelet[1908]: I0813 00:05:39.597630 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6gqmd" podStartSLOduration=9.324975172 podStartE2EDuration="18.597610645s" podCreationTimestamp="2025-08-13 00:05:21 +0000 UTC" firstStartedPulling="2025-08-13 00:05:24.027955858 +0000 UTC m=+4.083384931" lastFinishedPulling="2025-08-13 00:05:33.300591331 +0000 UTC m=+13.356020404" observedRunningTime="2025-08-13 00:05:39.596756929 +0000 UTC m=+19.652186002" watchObservedRunningTime="2025-08-13 00:05:39.597610645 +0000 UTC m=+19.653039718" Aug 13 00:05:40.418699 kubelet[1908]: E0813 00:05:40.418652 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:40.891837 systemd-networkd[1633]: cilium_host: Link UP Aug 13 00:05:40.902566 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 00:05:40.902708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:05:40.902907 systemd-networkd[1633]: cilium_net: Link UP Aug 13 00:05:40.903083 systemd-networkd[1633]: cilium_net: Gained carrier Aug 13 00:05:40.905727 systemd-networkd[1633]: cilium_host: Gained carrier Aug 13 00:05:41.001428 systemd-networkd[1633]: cilium_host: Gained IPv6LL Aug 13 00:05:41.039894 systemd-networkd[1633]: cilium_vxlan: Link UP Aug 13 00:05:41.039901 systemd-networkd[1633]: cilium_vxlan: Gained carrier Aug 13 00:05:41.233465 systemd-networkd[1633]: cilium_net: Gained IPv6LL Aug 13 00:05:41.310346 kernel: NET: Registered PF_ALG protocol family Aug 13 00:05:41.400670 kubelet[1908]: E0813 00:05:41.400623 1908 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:41.419214 kubelet[1908]: E0813 00:05:41.419188 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:42.105408 systemd-networkd[1633]: lxc_health: Link UP Aug 13 00:05:42.122798 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:05:42.121621 systemd-networkd[1633]: lxc_health: Gained carrier Aug 13 00:05:42.404616 systemd[1]: Created slice kubepods-besteffort-pod46a49223_da2b_454d_8036_4003a7bb901f.slice. 
Aug 13 00:05:42.419877 kubelet[1908]: E0813 00:05:42.419809 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:42.433469 systemd-networkd[1633]: cilium_vxlan: Gained IPv6LL Aug 13 00:05:42.473933 kubelet[1908]: I0813 00:05:42.473866 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6f56\" (UniqueName: \"kubernetes.io/projected/46a49223-da2b-454d-8036-4003a7bb901f-kube-api-access-t6f56\") pod \"nginx-deployment-7fcdb87857-vtddf\" (UID: \"46a49223-da2b-454d-8036-4003a7bb901f\") " pod="default/nginx-deployment-7fcdb87857-vtddf" Aug 13 00:05:42.708102 env[1486]: time="2025-08-13T00:05:42.707549056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vtddf,Uid:46a49223-da2b-454d-8036-4003a7bb901f,Namespace:default,Attempt:0,}" Aug 13 00:05:42.784447 systemd-networkd[1633]: lxc14ada7595604: Link UP Aug 13 00:05:42.794346 kernel: eth0: renamed from tmp2d610 Aug 13 00:05:42.807404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc14ada7595604: link becomes ready Aug 13 00:05:42.807583 systemd-networkd[1633]: lxc14ada7595604: Gained carrier Aug 13 00:05:43.420605 kubelet[1908]: E0813 00:05:43.420547 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:43.841495 systemd-networkd[1633]: lxc_health: Gained IPv6LL Aug 13 00:05:44.420980 kubelet[1908]: E0813 00:05:44.420917 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:44.545511 systemd-networkd[1633]: lxc14ada7595604: Gained IPv6LL Aug 13 00:05:45.421945 kubelet[1908]: E0813 00:05:45.421908 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:46.422511 kubelet[1908]: E0813 00:05:46.422449 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:46.786193 env[1486]: time="2025-08-13T00:05:46.786108530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:46.786193 env[1486]: time="2025-08-13T00:05:46.786193608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:46.786703 env[1486]: time="2025-08-13T00:05:46.786220647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:46.786743 env[1486]: time="2025-08-13T00:05:46.786680314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6106ed49fb63804002652b3b71c759ce56d7a36bb4455011f8ced2beb41b20 pid=3006 runtime=io.containerd.runc.v2 Aug 13 00:05:46.802076 systemd[1]: Started cri-containerd-2d6106ed49fb63804002652b3b71c759ce56d7a36bb4455011f8ced2beb41b20.scope. 
Aug 13 00:05:46.844144 env[1486]: time="2025-08-13T00:05:46.844095908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vtddf,Uid:46a49223-da2b-454d-8036-4003a7bb901f,Namespace:default,Attempt:0,} returns sandbox id \"2d6106ed49fb63804002652b3b71c759ce56d7a36bb4455011f8ced2beb41b20\"" Aug 13 00:05:46.845808 env[1486]: time="2025-08-13T00:05:46.845769062Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 00:05:47.423220 kubelet[1908]: E0813 00:05:47.423171 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:48.424069 kubelet[1908]: E0813 00:05:48.424005 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:49.056208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082214159.mount: Deactivated successfully. Aug 13 00:05:49.424552 kubelet[1908]: E0813 00:05:49.424397 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:50.424799 kubelet[1908]: E0813 00:05:50.424752 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:50.814764 env[1486]: time="2025-08-13T00:05:50.814691425Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:50.821059 env[1486]: time="2025-08-13T00:05:50.821014466Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:07abd578947db789c018f907bed24fcc55d80455e9614b35a065bf3af4f3ac27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:50.827440 env[1486]: time="2025-08-13T00:05:50.827385427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:50.831973 env[1486]: time="2025-08-13T00:05:50.831929353Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:05:50.832731 env[1486]: time="2025-08-13T00:05:50.832692054Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:07abd578947db789c018f907bed24fcc55d80455e9614b35a065bf3af4f3ac27\"" Aug 13 00:05:50.840295 env[1486]: time="2025-08-13T00:05:50.840245824Z" level=info msg="CreateContainer within sandbox \"2d6106ed49fb63804002652b3b71c759ce56d7a36bb4455011f8ced2beb41b20\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Aug 13 00:05:50.866105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977100781.mount: Deactivated successfully. Aug 13 00:05:50.872530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2392259301.mount: Deactivated successfully. 
Aug 13 00:05:50.884650 env[1486]: time="2025-08-13T00:05:50.884600192Z" level=info msg="CreateContainer within sandbox \"2d6106ed49fb63804002652b3b71c759ce56d7a36bb4455011f8ced2beb41b20\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9b8bbeda2988ca4f1ac13cf3efc47270feb06357aa57221b6e6c3df1152863ab\"" Aug 13 00:05:50.885730 env[1486]: time="2025-08-13T00:05:50.885692845Z" level=info msg="StartContainer for \"9b8bbeda2988ca4f1ac13cf3efc47270feb06357aa57221b6e6c3df1152863ab\"" Aug 13 00:05:50.905152 systemd[1]: Started cri-containerd-9b8bbeda2988ca4f1ac13cf3efc47270feb06357aa57221b6e6c3df1152863ab.scope. Aug 13 00:05:50.940742 env[1486]: time="2025-08-13T00:05:50.940679426Z" level=info msg="StartContainer for \"9b8bbeda2988ca4f1ac13cf3efc47270feb06357aa57221b6e6c3df1152863ab\" returns successfully" Aug 13 00:05:51.425160 kubelet[1908]: E0813 00:05:51.425112 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:52.426265 kubelet[1908]: E0813 00:05:52.426225 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:53.427160 kubelet[1908]: E0813 00:05:53.427120 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:54.428617 kubelet[1908]: E0813 00:05:54.428575 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:55.429442 kubelet[1908]: E0813 00:05:55.429398 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:56.430118 kubelet[1908]: E0813 00:05:56.430081 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:57.430973 kubelet[1908]: E0813 00:05:57.430934 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:57.774011 kubelet[1908]: I0813 00:05:57.773937 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-vtddf" podStartSLOduration=11.784769473 podStartE2EDuration="15.773919287s" podCreationTimestamp="2025-08-13 00:05:42 +0000 UTC" firstStartedPulling="2025-08-13 00:05:46.844993923 +0000 UTC m=+26.900422996" lastFinishedPulling="2025-08-13 00:05:50.834143737 +0000 UTC m=+30.889572810" observedRunningTime="2025-08-13 00:05:51.602172678 +0000 UTC m=+31.657601751" watchObservedRunningTime="2025-08-13 00:05:57.773919287 +0000 UTC m=+37.829348360" Aug 13 00:05:57.782723 systemd[1]: Created slice kubepods-besteffort-pod8723e4ba_bf5d_43e2_a5eb_a176036ee177.slice. 
Aug 13 00:05:57.956352 kubelet[1908]: I0813 00:05:57.956294 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzpt\" (UniqueName: \"kubernetes.io/projected/8723e4ba-bf5d-43e2-a5eb-a176036ee177-kube-api-access-gkzpt\") pod \"nfs-server-provisioner-0\" (UID: \"8723e4ba-bf5d-43e2-a5eb-a176036ee177\") " pod="default/nfs-server-provisioner-0" Aug 13 00:05:57.956516 kubelet[1908]: I0813 00:05:57.956361 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8723e4ba-bf5d-43e2-a5eb-a176036ee177-data\") pod \"nfs-server-provisioner-0\" (UID: \"8723e4ba-bf5d-43e2-a5eb-a176036ee177\") " pod="default/nfs-server-provisioner-0" Aug 13 00:05:58.086129 env[1486]: time="2025-08-13T00:05:58.086014346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8723e4ba-bf5d-43e2-a5eb-a176036ee177,Namespace:default,Attempt:0,}" Aug 13 00:05:58.150967 systemd-networkd[1633]: lxc8240ef45d524: Link UP Aug 13 00:05:58.168551 kernel: eth0: renamed from tmp2cd2f Aug 13 00:05:58.184588 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:05:58.184729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8240ef45d524: link becomes ready Aug 13 00:05:58.185149 systemd-networkd[1633]: lxc8240ef45d524: Gained carrier Aug 13 00:05:58.369545 env[1486]: time="2025-08-13T00:05:58.369050768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:58.369545 env[1486]: time="2025-08-13T00:05:58.369145326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:58.369545 env[1486]: time="2025-08-13T00:05:58.369170765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:58.369545 env[1486]: time="2025-08-13T00:05:58.369466759Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cd2f14e003eed3eff0f1b7b2d7ad22ea83f23e3de68d55bf1be61198e73ab5c pid=3132 runtime=io.containerd.runc.v2 Aug 13 00:05:58.383563 systemd[1]: Started cri-containerd-2cd2f14e003eed3eff0f1b7b2d7ad22ea83f23e3de68d55bf1be61198e73ab5c.scope. 
Aug 13 00:05:58.423364 env[1486]: time="2025-08-13T00:05:58.422640281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8723e4ba-bf5d-43e2-a5eb-a176036ee177,Namespace:default,Attempt:0,} returns sandbox id \"2cd2f14e003eed3eff0f1b7b2d7ad22ea83f23e3de68d55bf1be61198e73ab5c\"" Aug 13 00:05:58.425441 env[1486]: time="2025-08-13T00:05:58.425403345Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Aug 13 00:05:58.432280 kubelet[1908]: E0813 00:05:58.432200 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:59.432665 kubelet[1908]: E0813 00:05:59.432606 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:05:59.905552 systemd-networkd[1633]: lxc8240ef45d524: Gained IPv6LL Aug 13 00:06:00.433441 kubelet[1908]: E0813 00:06:00.433396 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:00.687295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383864631.mount: Deactivated successfully. Aug 13 00:06:01.400760 kubelet[1908]: E0813 00:06:01.400540 1908 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:01.434040 kubelet[1908]: E0813 00:06:01.433996 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:02.434667 kubelet[1908]: E0813 00:06:02.434601 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:02.704462 env[1486]: time="2025-08-13T00:06:02.704329922Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:02.712389 env[1486]: time="2025-08-13T00:06:02.712342735Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:02.717426 env[1486]: time="2025-08-13T00:06:02.717379523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:02.723269 env[1486]: time="2025-08-13T00:06:02.723218776Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:02.725073 env[1486]: time="2025-08-13T00:06:02.724306236Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Aug 13 00:06:02.732077 env[1486]: time="2025-08-13T00:06:02.732030215Z" level=info msg="CreateContainer within sandbox \"2cd2f14e003eed3eff0f1b7b2d7ad22ea83f23e3de68d55bf1be61198e73ab5c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Aug 13 00:06:02.756851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594791770.mount: Deactivated successfully. 
Aug 13 00:06:02.762684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851873719.mount: Deactivated successfully. Aug 13 00:06:02.775403 env[1486]: time="2025-08-13T00:06:02.775350063Z" level=info msg="CreateContainer within sandbox \"2cd2f14e003eed3eff0f1b7b2d7ad22ea83f23e3de68d55bf1be61198e73ab5c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e143b92fdff953ad1c176fc739275516c34f39d0356e84d6e07bc270cb96fc91\"" Aug 13 00:06:02.776619 env[1486]: time="2025-08-13T00:06:02.776579720Z" level=info msg="StartContainer for \"e143b92fdff953ad1c176fc739275516c34f39d0356e84d6e07bc270cb96fc91\"" Aug 13 00:06:02.796111 systemd[1]: Started cri-containerd-e143b92fdff953ad1c176fc739275516c34f39d0356e84d6e07bc270cb96fc91.scope. Aug 13 00:06:02.827314 env[1486]: time="2025-08-13T00:06:02.827215034Z" level=info msg="StartContainer for \"e143b92fdff953ad1c176fc739275516c34f39d0356e84d6e07bc270cb96fc91\" returns successfully" Aug 13 00:06:03.435589 kubelet[1908]: E0813 00:06:03.435555 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:03.634143 kubelet[1908]: I0813 00:06:03.634064 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.332846783 podStartE2EDuration="6.634041351s" podCreationTimestamp="2025-08-13 00:05:57 +0000 UTC" firstStartedPulling="2025-08-13 00:05:58.424725759 +0000 UTC m=+38.480154792" lastFinishedPulling="2025-08-13 00:06:02.725920287 +0000 UTC m=+42.781349360" observedRunningTime="2025-08-13 00:06:03.633655078 +0000 UTC m=+43.689084151" watchObservedRunningTime="2025-08-13 00:06:03.634041351 +0000 UTC m=+43.689470424" Aug 13 00:06:04.437227 kubelet[1908]: E0813 00:06:04.437166 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:05.437419 kubelet[1908]: E0813 00:06:05.437370 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:06.438554 kubelet[1908]: E0813 00:06:06.438501 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:07.439403 kubelet[1908]: E0813 00:06:07.439353 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:08.150790 systemd[1]: Created slice kubepods-besteffort-podb647fbdf_700a_427c_b7d4_90cf88c5478c.slice. 
Aug 13 00:06:08.319009 kubelet[1908]: I0813 00:06:08.318960 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ca727cf8-bf61-41a2-85a0-a78d0e8f821e\" (UniqueName: \"kubernetes.io/nfs/b647fbdf-700a-427c-b7d4-90cf88c5478c-pvc-ca727cf8-bf61-41a2-85a0-a78d0e8f821e\") pod \"test-pod-1\" (UID: \"b647fbdf-700a-427c-b7d4-90cf88c5478c\") " pod="default/test-pod-1" Aug 13 00:06:08.319009 kubelet[1908]: I0813 00:06:08.319009 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j54q9\" (UniqueName: \"kubernetes.io/projected/b647fbdf-700a-427c-b7d4-90cf88c5478c-kube-api-access-j54q9\") pod \"test-pod-1\" (UID: \"b647fbdf-700a-427c-b7d4-90cf88c5478c\") " pod="default/test-pod-1" Aug 13 00:06:08.440497 kubelet[1908]: E0813 00:06:08.439959 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:08.553351 kernel: FS-Cache: Loaded Aug 13 00:06:08.635925 kernel: RPC: Registered named UNIX socket transport module. Aug 13 00:06:08.636064 kernel: RPC: Registered udp transport module. Aug 13 00:06:08.640022 kernel: RPC: Registered tcp transport module. Aug 13 00:06:08.645259 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Aug 13 00:06:08.784446 kernel: FS-Cache: Netfs 'nfs' registered for caching Aug 13 00:06:08.957091 kernel: NFS: Registering the id_resolver key type Aug 13 00:06:08.957249 kernel: Key type id_resolver registered Aug 13 00:06:08.957274 kernel: Key type id_legacy registered Aug 13 00:06:09.244211 nfsidmap[3249]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.8-a-72bb20ad6b' Aug 13 00:06:09.313961 nfsidmap[3250]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.8-a-72bb20ad6b' Aug 13 00:06:09.354760 env[1486]: time="2025-08-13T00:06:09.354701229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b647fbdf-700a-427c-b7d4-90cf88c5478c,Namespace:default,Attempt:0,}" Aug 13 00:06:09.416032 systemd-networkd[1633]: lxc21101e4a6448: Link UP Aug 13 00:06:09.429392 kernel: eth0: renamed from tmp75d00 Aug 13 00:06:09.440986 kubelet[1908]: E0813 00:06:09.440924 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:09.445434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:06:09.445546 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc21101e4a6448: link becomes ready Aug 13 00:06:09.445250 systemd-networkd[1633]: lxc21101e4a6448: Gained carrier Aug 13 00:06:09.635475 env[1486]: time="2025-08-13T00:06:09.634843761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:06:09.635475 env[1486]: time="2025-08-13T00:06:09.634893240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:06:09.635475 env[1486]: time="2025-08-13T00:06:09.634903760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:06:09.636146 env[1486]: time="2025-08-13T00:06:09.635900385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75d003d872df622705e86c86057735ab4ab0868a0b031e34109d9cd9ef6601b4 pid=3275 runtime=io.containerd.runc.v2 Aug 13 00:06:09.656197 systemd[1]: run-containerd-runc-k8s.io-75d003d872df622705e86c86057735ab4ab0868a0b031e34109d9cd9ef6601b4-runc.jv8hhb.mount: Deactivated successfully. Aug 13 00:06:09.660636 systemd[1]: Started cri-containerd-75d003d872df622705e86c86057735ab4ab0868a0b031e34109d9cd9ef6601b4.scope. Aug 13 00:06:09.694948 env[1486]: time="2025-08-13T00:06:09.694892437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b647fbdf-700a-427c-b7d4-90cf88c5478c,Namespace:default,Attempt:0,} returns sandbox id \"75d003d872df622705e86c86057735ab4ab0868a0b031e34109d9cd9ef6601b4\"" Aug 13 00:06:09.696727 env[1486]: time="2025-08-13T00:06:09.696684650Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Aug 13 00:06:10.011117 env[1486]: time="2025-08-13T00:06:10.011053979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:10.021375 env[1486]: time="2025-08-13T00:06:10.021297225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:07abd578947db789c018f907bed24fcc55d80455e9614b35a065bf3af4f3ac27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:10.025854 env[1486]: time="2025-08-13T00:06:10.025807198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:10.030527 env[1486]: time="2025-08-13T00:06:10.030472087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:a6969d434cb816d30787e9f7ab16b632e12dc05a2c8f4dae701d83ef2199c985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:10.031438 env[1486]: time="2025-08-13T00:06:10.031398994Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:07abd578947db789c018f907bed24fcc55d80455e9614b35a065bf3af4f3ac27\"" Aug 13 00:06:10.039982 env[1486]: time="2025-08-13T00:06:10.039932665Z" level=info msg="CreateContainer within sandbox \"75d003d872df622705e86c86057735ab4ab0868a0b031e34109d9cd9ef6601b4\" for container &ContainerMetadata{Name:test,Attempt:0,}" Aug 13 00:06:10.080685 env[1486]: time="2025-08-13T00:06:10.080628934Z" level=info msg="CreateContainer within sandbox \"75d003d872df622705e86c86057735ab4ab0868a0b031e34109d9cd9ef6601b4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9c8c46020ed46269546fd9581c5e2c33a8b53c8d3f35fa482293dacd4759a90d\"" Aug 13 00:06:10.081899 env[1486]: time="2025-08-13T00:06:10.081858676Z" level=info msg="StartContainer for \"9c8c46020ed46269546fd9581c5e2c33a8b53c8d3f35fa482293dacd4759a90d\"" Aug 13 00:06:10.099197 systemd[1]: Started cri-containerd-9c8c46020ed46269546fd9581c5e2c33a8b53c8d3f35fa482293dacd4759a90d.scope. 
Aug 13 00:06:10.137423 env[1486]: time="2025-08-13T00:06:10.137362203Z" level=info msg="StartContainer for \"9c8c46020ed46269546fd9581c5e2c33a8b53c8d3f35fa482293dacd4759a90d\" returns successfully" Aug 13 00:06:10.441483 kubelet[1908]: E0813 00:06:10.441353 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:10.529563 systemd-networkd[1633]: lxc21101e4a6448: Gained IPv6LL Aug 13 00:06:10.637088 kubelet[1908]: I0813 00:06:10.637025 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=12.2994716 podStartE2EDuration="12.637005341s" podCreationTimestamp="2025-08-13 00:05:58 +0000 UTC" firstStartedPulling="2025-08-13 00:06:09.695923022 +0000 UTC m=+49.751352055" lastFinishedPulling="2025-08-13 00:06:10.033456723 +0000 UTC m=+50.088885796" observedRunningTime="2025-08-13 00:06:10.636770665 +0000 UTC m=+50.692199738" watchObservedRunningTime="2025-08-13 00:06:10.637005341 +0000 UTC m=+50.692434414" Aug 13 00:06:10.640255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39943497.mount: Deactivated successfully. Aug 13 00:06:11.442418 kubelet[1908]: E0813 00:06:11.442371 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:12.442965 kubelet[1908]: E0813 00:06:12.442916 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:13.444277 kubelet[1908]: E0813 00:06:13.444220 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:14.444769 kubelet[1908]: E0813 00:06:14.444699 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:14.533689 systemd[1]: run-containerd-runc-k8s.io-b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8-runc.AH91dF.mount: Deactivated successfully. Aug 13 00:06:14.551268 env[1486]: time="2025-08-13T00:06:14.551198036Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:06:14.559115 env[1486]: time="2025-08-13T00:06:14.559064209Z" level=info msg="StopContainer for \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\" with timeout 2 (s)" Aug 13 00:06:14.559685 env[1486]: time="2025-08-13T00:06:14.559628921Z" level=info msg="Stop container \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\" with signal terminated" Aug 13 00:06:14.566398 systemd-networkd[1633]: lxc_health: Link DOWN Aug 13 00:06:14.566406 systemd-networkd[1633]: lxc_health: Lost carrier Aug 13 00:06:14.600107 systemd[1]: cri-containerd-b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8.scope: Deactivated successfully. Aug 13 00:06:14.600495 systemd[1]: cri-containerd-b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8.scope: Consumed 6.744s CPU time. Aug 13 00:06:14.618938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8-rootfs.mount: Deactivated successfully. 
Aug 13 00:06:15.445554 kubelet[1908]: E0813 00:06:15.445490 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:15.545585 env[1486]: time="2025-08-13T00:06:15.545532622Z" level=info msg="shim disconnected" id=b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8 Aug 13 00:06:15.545916 env[1486]: time="2025-08-13T00:06:15.545893058Z" level=warning msg="cleaning up after shim disconnected" id=b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8 namespace=k8s.io Aug 13 00:06:15.546022 env[1486]: time="2025-08-13T00:06:15.546006576Z" level=info msg="cleaning up dead shim" Aug 13 00:06:15.554379 env[1486]: time="2025-08-13T00:06:15.554309785Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3408 runtime=io.containerd.runc.v2\n" Aug 13 00:06:15.559746 env[1486]: time="2025-08-13T00:06:15.559689753Z" level=info msg="StopContainer for \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\" returns successfully" Aug 13 00:06:15.560634 env[1486]: time="2025-08-13T00:06:15.560602181Z" level=info msg="StopPodSandbox for \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\"" Aug 13 00:06:15.560851 env[1486]: time="2025-08-13T00:06:15.560827258Z" level=info msg="Container to stop \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:06:15.560934 env[1486]: time="2025-08-13T00:06:15.560915537Z" level=info msg="Container to stop \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:06:15.560999 env[1486]: time="2025-08-13T00:06:15.560981696Z" level=info msg="Container to stop \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:06:15.561064 env[1486]: time="2025-08-13T00:06:15.561046975Z" level=info msg="Container to stop \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:06:15.561129 env[1486]: time="2025-08-13T00:06:15.561112174Z" level=info msg="Container to stop \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:06:15.563129 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab-shm.mount: Deactivated successfully. Aug 13 00:06:15.570257 systemd[1]: cri-containerd-1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab.scope: Deactivated successfully. Aug 13 00:06:15.593204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab-rootfs.mount: Deactivated successfully. 
Aug 13 00:06:15.605235 env[1486]: time="2025-08-13T00:06:15.605169946Z" level=info msg="shim disconnected" id=1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab Aug 13 00:06:15.605235 env[1486]: time="2025-08-13T00:06:15.605237665Z" level=warning msg="cleaning up after shim disconnected" id=1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab namespace=k8s.io Aug 13 00:06:15.605563 env[1486]: time="2025-08-13T00:06:15.605249785Z" level=info msg="cleaning up dead shim" Aug 13 00:06:15.613536 env[1486]: time="2025-08-13T00:06:15.613479235Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3438 runtime=io.containerd.runc.v2\n" Aug 13 00:06:15.613860 env[1486]: time="2025-08-13T00:06:15.613825031Z" level=info msg="TearDown network for sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" successfully" Aug 13 00:06:15.613896 env[1486]: time="2025-08-13T00:06:15.613856870Z" level=info msg="StopPodSandbox for \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" returns successfully" Aug 13 00:06:15.635242 kubelet[1908]: I0813 00:06:15.635172 1908 scope.go:117] "RemoveContainer" containerID="b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8" Aug 13 00:06:15.636754 env[1486]: time="2025-08-13T00:06:15.636713365Z" level=info msg="RemoveContainer for \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\"" Aug 13 00:06:15.644918 env[1486]: time="2025-08-13T00:06:15.644870336Z" level=info msg="RemoveContainer for \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\" returns successfully" Aug 13 00:06:15.645434 kubelet[1908]: I0813 00:06:15.645407 1908 scope.go:117] "RemoveContainer" containerID="b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc" Aug 13 00:06:15.646749 env[1486]: time="2025-08-13T00:06:15.646714832Z" level=info msg="RemoveContainer for \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\"" Aug 13 00:06:15.653913 env[1486]: time="2025-08-13T00:06:15.653863336Z" level=info msg="RemoveContainer for \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\" returns successfully" Aug 13 00:06:15.654734 kubelet[1908]: I0813 00:06:15.654699 1908 scope.go:117] "RemoveContainer" containerID="720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe" Aug 13 00:06:15.656405 env[1486]: time="2025-08-13T00:06:15.656373463Z" level=info msg="RemoveContainer for \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\"" Aug 13 00:06:15.664141 env[1486]: time="2025-08-13T00:06:15.664095960Z" level=info msg="RemoveContainer for \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\" returns successfully" Aug 13 00:06:15.664663 kubelet[1908]: I0813 00:06:15.664629 1908 scope.go:117] "RemoveContainer" containerID="025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7" Aug 13 00:06:15.666059 env[1486]: time="2025-08-13T00:06:15.666014134Z" level=info msg="RemoveContainer for \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\"" Aug 13 00:06:15.672757 env[1486]: time="2025-08-13T00:06:15.672707765Z" level=info msg="RemoveContainer for \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\" returns successfully" Aug 13 00:06:15.673116 kubelet[1908]: I0813 00:06:15.672987 1908 scope.go:117] "RemoveContainer" containerID="f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8" Aug 13 00:06:15.674705 
env[1486]: time="2025-08-13T00:06:15.674668019Z" level=info msg="RemoveContainer for \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\"" Aug 13 00:06:15.682255 env[1486]: time="2025-08-13T00:06:15.682194718Z" level=info msg="RemoveContainer for \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\" returns successfully" Aug 13 00:06:15.682740 kubelet[1908]: I0813 00:06:15.682604 1908 scope.go:117] "RemoveContainer" containerID="b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8" Aug 13 00:06:15.683104 env[1486]: time="2025-08-13T00:06:15.683029867Z" level=error msg="ContainerStatus for \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\": not found" Aug 13 00:06:15.683556 kubelet[1908]: E0813 00:06:15.683360 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\": not found" containerID="b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8" Aug 13 00:06:15.683556 kubelet[1908]: I0813 00:06:15.683397 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8"} err="failed to get container status \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b30ade6c1715aabdc5915d4be25d20005a0cb20acaa86fe2c5a84d12be81c9d8\": not found" Aug 13 00:06:15.683556 kubelet[1908]: I0813 00:06:15.683451 1908 scope.go:117] "RemoveContainer" containerID="b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc" Aug 13 00:06:15.683830 env[1486]: time="2025-08-13T00:06:15.683691818Z" level=error msg="ContainerStatus for \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\": not found" Aug 13 00:06:15.683953 kubelet[1908]: E0813 00:06:15.683926 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\": not found" containerID="b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc" Aug 13 00:06:15.684003 kubelet[1908]: I0813 00:06:15.683958 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc"} err="failed to get container status \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3dde35ee8cdf84bf21fcfb30ccc6c2a47439f0e1ca1edfcd96a0879c14388fc\": not found" Aug 13 00:06:15.684003 kubelet[1908]: I0813 00:06:15.683980 1908 scope.go:117] "RemoveContainer" containerID="720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe" Aug 13 00:06:15.684247 env[1486]: time="2025-08-13T00:06:15.684188332Z" level=error msg="ContainerStatus for \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\" failed" error="rpc error: code = NotFound desc = an error 
occurred when try to find container \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\": not found" Aug 13 00:06:15.684428 kubelet[1908]: E0813 00:06:15.684402 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\": not found" containerID="720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe" Aug 13 00:06:15.684478 kubelet[1908]: I0813 00:06:15.684435 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe"} err="failed to get container status \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\": rpc error: code = NotFound desc = an error occurred when try to find container \"720a043623f64a3761e5a1185e9febb6e33d8e3e02e4479584fb0c77059dfafe\": not found" Aug 13 00:06:15.684478 kubelet[1908]: I0813 00:06:15.684452 1908 scope.go:117] "RemoveContainer" containerID="025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7" Aug 13 00:06:15.684680 env[1486]: time="2025-08-13T00:06:15.684624966Z" level=error msg="ContainerStatus for \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\": not found" Aug 13 00:06:15.684800 kubelet[1908]: E0813 00:06:15.684774 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\": not found" containerID="025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7" Aug 13 00:06:15.684841 kubelet[1908]: I0813 00:06:15.684802 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7"} err="failed to get container status \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\": rpc error: code = NotFound desc = an error occurred when try to find container \"025eb030fc9166c1ed04fdbbf8cfdc6d04327e4ce6686d85e7a27d8659146ef7\": not found" Aug 13 00:06:15.684841 kubelet[1908]: I0813 00:06:15.684816 1908 scope.go:117] "RemoveContainer" containerID="f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8" Aug 13 00:06:15.685058 env[1486]: time="2025-08-13T00:06:15.685004681Z" level=error msg="ContainerStatus for \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\": not found" Aug 13 00:06:15.685165 kubelet[1908]: E0813 00:06:15.685140 1908 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\": not found" containerID="f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8" Aug 13 00:06:15.685207 kubelet[1908]: I0813 00:06:15.685167 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8"} err="failed to get container status 
\"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1c0f8e90faa78b5e22e26645ea631fe48f693346835d7ff8aa86b61afb0d6c8\": not found" Aug 13 00:06:15.771359 kubelet[1908]: I0813 00:06:15.768522 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-bpf-maps\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771359 kubelet[1908]: I0813 00:06:15.768564 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cni-path\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771359 kubelet[1908]: I0813 00:06:15.768628 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-lib-modules\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771359 kubelet[1908]: I0813 00:06:15.768647 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-cgroup\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771359 kubelet[1908]: I0813 00:06:15.768661 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-xtables-lock\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771359 kubelet[1908]: I0813 00:06:15.768683 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-run\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771723 kubelet[1908]: I0813 00:06:15.768702 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-hostproc\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771723 kubelet[1908]: I0813 00:06:15.768718 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-etc-cni-netd\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771723 kubelet[1908]: I0813 00:06:15.768741 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-clustermesh-secrets\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771723 kubelet[1908]: I0813 00:06:15.768760 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-config-path\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771723 kubelet[1908]: I0813 00:06:15.768780 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-host-proc-sys-net\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771723 kubelet[1908]: I0813 00:06:15.768797 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-host-proc-sys-kernel\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771865 kubelet[1908]: I0813 00:06:15.768820 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-hubble-tls\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771865 kubelet[1908]: I0813 00:06:15.768839 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clntz\" (UniqueName: \"kubernetes.io/projected/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-kube-api-access-clntz\") pod \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\" (UID: \"0b15c753-fb50-4c5f-bf72-086bb6f0c77d\") " Aug 13 00:06:15.771865 kubelet[1908]: I0813 00:06:15.769252 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.771865 kubelet[1908]: I0813 00:06:15.769301 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.771865 kubelet[1908]: I0813 00:06:15.769353 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.775120 kubelet[1908]: I0813 00:06:15.769370 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.775120 kubelet[1908]: I0813 00:06:15.769392 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.775120 kubelet[1908]: I0813 00:06:15.769406 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.775120 kubelet[1908]: I0813 00:06:15.769420 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.775120 kubelet[1908]: I0813 00:06:15.771283 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:06:15.773687 systemd[1]: var-lib-kubelet-pods-0b15c753\x2dfb50\x2d4c5f\x2dbf72\x2d086bb6f0c77d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dclntz.mount: Deactivated successfully. Aug 13 00:06:15.775633 kubelet[1908]: I0813 00:06:15.774415 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.776536 kubelet[1908]: I0813 00:06:15.776494 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-kube-api-access-clntz" (OuterVolumeSpecName: "kube-api-access-clntz") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "kube-api-access-clntz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:06:15.776738 kubelet[1908]: I0813 00:06:15.776719 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.776843 kubelet[1908]: I0813 00:06:15.776828 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:15.785089 kubelet[1908]: I0813 00:06:15.781538 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:06:15.782068 systemd[1]: var-lib-kubelet-pods-0b15c753\x2dfb50\x2d4c5f\x2dbf72\x2d086bb6f0c77d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:06:15.784414 systemd[1]: var-lib-kubelet-pods-0b15c753\x2dfb50\x2d4c5f\x2dbf72\x2d086bb6f0c77d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:06:15.786436 kubelet[1908]: I0813 00:06:15.786393 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b15c753-fb50-4c5f-bf72-086bb6f0c77d" (UID: "0b15c753-fb50-4c5f-bf72-086bb6f0c77d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:06:15.869766 kubelet[1908]: I0813 00:06:15.869721 1908 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-lib-modules\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.869766 kubelet[1908]: I0813 00:06:15.869758 1908 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-cgroup\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.869766 kubelet[1908]: I0813 00:06:15.869768 1908 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-xtables-lock\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.869766 kubelet[1908]: I0813 00:06:15.869776 1908 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-run\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870014 kubelet[1908]: I0813 00:06:15.869786 1908 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-hostproc\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870014 kubelet[1908]: I0813 00:06:15.869794 1908 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-etc-cni-netd\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870014 kubelet[1908]: I0813 00:06:15.869804 1908 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-clustermesh-secrets\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870014 kubelet[1908]: I0813 00:06:15.869814 1908 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cilium-config-path\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870014 kubelet[1908]: I0813 00:06:15.869822 1908 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-host-proc-sys-net\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870014 kubelet[1908]: I0813 00:06:15.869829 1908 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-host-proc-sys-kernel\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870014 kubelet[1908]: I0813 00:06:15.869837 1908 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-hubble-tls\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870014 kubelet[1908]: I0813 00:06:15.869845 1908 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clntz\" (UniqueName: \"kubernetes.io/projected/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-kube-api-access-clntz\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870189 kubelet[1908]: I0813 00:06:15.869854 1908 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-bpf-maps\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.870189 kubelet[1908]: I0813 00:06:15.869861 1908 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b15c753-fb50-4c5f-bf72-086bb6f0c77d-cni-path\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:15.939965 systemd[1]: Removed slice kubepods-burstable-pod0b15c753_fb50_4c5f_bf72_086bb6f0c77d.slice. Aug 13 00:06:15.940062 systemd[1]: kubepods-burstable-pod0b15c753_fb50_4c5f_bf72_086bb6f0c77d.slice: Consumed 6.853s CPU time. Aug 13 00:06:16.446501 kubelet[1908]: E0813 00:06:16.446455 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:16.503608 kubelet[1908]: E0813 00:06:16.503562 1908 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:06:17.446624 kubelet[1908]: E0813 00:06:17.446570 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:17.521808 kubelet[1908]: I0813 00:06:17.521759 1908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b15c753-fb50-4c5f-bf72-086bb6f0c77d" path="/var/lib/kubelet/pods/0b15c753-fb50-4c5f-bf72-086bb6f0c77d/volumes" Aug 13 00:06:18.447629 kubelet[1908]: E0813 00:06:18.447582 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:18.692379 systemd[1]: Created slice kubepods-besteffort-pod14790085_c133_4c9a_b2a6_45a9af0b501c.slice. 
Aug 13 00:06:18.735899 systemd[1]: Created slice kubepods-burstable-pod0feb190e_6608_41ca_ae74_25497d9e259f.slice. Aug 13 00:06:18.784905 kubelet[1908]: I0813 00:06:18.784856 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14790085-c133-4c9a-b2a6-45a9af0b501c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sdpf5\" (UID: \"14790085-c133-4c9a-b2a6-45a9af0b501c\") " pod="kube-system/cilium-operator-6c4d7847fc-sdpf5" Aug 13 00:06:18.784905 kubelet[1908]: I0813 00:06:18.784900 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhhv\" (UniqueName: \"kubernetes.io/projected/14790085-c133-4c9a-b2a6-45a9af0b501c-kube-api-access-6jhhv\") pod \"cilium-operator-6c4d7847fc-sdpf5\" (UID: \"14790085-c133-4c9a-b2a6-45a9af0b501c\") " pod="kube-system/cilium-operator-6c4d7847fc-sdpf5" Aug 13 00:06:18.885863 kubelet[1908]: I0813 00:06:18.885820 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-bpf-maps\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886046 kubelet[1908]: I0813 00:06:18.886029 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-hostproc\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886136 kubelet[1908]: I0813 00:06:18.886119 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cni-path\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886223 kubelet[1908]: I0813 00:06:18.886206 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-host-proc-sys-net\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886312 kubelet[1908]: I0813 00:06:18.886298 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-etc-cni-netd\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886445 kubelet[1908]: I0813 00:06:18.886430 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-host-proc-sys-kernel\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886539 kubelet[1908]: I0813 00:06:18.886524 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0feb190e-6608-41ca-ae74-25497d9e259f-clustermesh-secrets\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " 
pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886620 kubelet[1908]: I0813 00:06:18.886606 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0feb190e-6608-41ca-ae74-25497d9e259f-hubble-tls\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886703 kubelet[1908]: I0813 00:06:18.886691 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-cgroup\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886795 kubelet[1908]: I0813 00:06:18.886778 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-run\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886888 kubelet[1908]: I0813 00:06:18.886873 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-lib-modules\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.886970 kubelet[1908]: I0813 00:06:18.886955 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-config-path\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.887054 kubelet[1908]: I0813 00:06:18.887037 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-ipsec-secrets\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.887150 kubelet[1908]: I0813 00:06:18.887135 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-xtables-lock\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:18.887233 kubelet[1908]: I0813 00:06:18.887218 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bqzf\" (UniqueName: \"kubernetes.io/projected/0feb190e-6608-41ca-ae74-25497d9e259f-kube-api-access-4bqzf\") pod \"cilium-c4lkc\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " pod="kube-system/cilium-c4lkc" Aug 13 00:06:19.001176 env[1486]: time="2025-08-13T00:06:18.997696910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sdpf5,Uid:14790085-c133-4c9a-b2a6-45a9af0b501c,Namespace:kube-system,Attempt:0,}" Aug 13 00:06:19.032731 env[1486]: time="2025-08-13T00:06:19.032627164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:06:19.032731 env[1486]: time="2025-08-13T00:06:19.032678643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:06:19.032731 env[1486]: time="2025-08-13T00:06:19.032689323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:06:19.033166 env[1486]: time="2025-08-13T00:06:19.033122438Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35a30acc258e0317e29aeb69ecaec5db86cff5e777cc9e9863f0e99dca32522c pid=3466 runtime=io.containerd.runc.v2 Aug 13 00:06:19.045584 systemd[1]: Started cri-containerd-35a30acc258e0317e29aeb69ecaec5db86cff5e777cc9e9863f0e99dca32522c.scope. Aug 13 00:06:19.048303 env[1486]: time="2025-08-13T00:06:19.048256573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c4lkc,Uid:0feb190e-6608-41ca-ae74-25497d9e259f,Namespace:kube-system,Attempt:0,}" Aug 13 00:06:19.079976 env[1486]: time="2025-08-13T00:06:19.079922307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sdpf5,Uid:14790085-c133-4c9a-b2a6-45a9af0b501c,Namespace:kube-system,Attempt:0,} returns sandbox id \"35a30acc258e0317e29aeb69ecaec5db86cff5e777cc9e9863f0e99dca32522c\"" Aug 13 00:06:19.082218 env[1486]: time="2025-08-13T00:06:19.082169719Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:06:19.084169 env[1486]: time="2025-08-13T00:06:19.084074536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:06:19.084169 env[1486]: time="2025-08-13T00:06:19.084124016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:06:19.084407 env[1486]: time="2025-08-13T00:06:19.084155135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:06:19.086043 env[1486]: time="2025-08-13T00:06:19.084532851Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e pid=3506 runtime=io.containerd.runc.v2 Aug 13 00:06:19.096397 systemd[1]: Started cri-containerd-5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e.scope. 
Aug 13 00:06:19.124894 env[1486]: time="2025-08-13T00:06:19.124800599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c4lkc,Uid:0feb190e-6608-41ca-ae74-25497d9e259f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\"" Aug 13 00:06:19.134441 env[1486]: time="2025-08-13T00:06:19.134382123Z" level=info msg="CreateContainer within sandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:06:19.158265 env[1486]: time="2025-08-13T00:06:19.158199192Z" level=info msg="CreateContainer within sandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7\"" Aug 13 00:06:19.159507 env[1486]: time="2025-08-13T00:06:19.159464137Z" level=info msg="StartContainer for \"de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7\"" Aug 13 00:06:19.175578 systemd[1]: Started cri-containerd-de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7.scope. Aug 13 00:06:19.189769 systemd[1]: cri-containerd-de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7.scope: Deactivated successfully. Aug 13 00:06:19.224721 env[1486]: time="2025-08-13T00:06:19.224654062Z" level=info msg="shim disconnected" id=de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7 Aug 13 00:06:19.224721 env[1486]: time="2025-08-13T00:06:19.224717061Z" level=warning msg="cleaning up after shim disconnected" id=de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7 namespace=k8s.io Aug 13 00:06:19.224721 env[1486]: time="2025-08-13T00:06:19.224726181Z" level=info msg="cleaning up dead shim" Aug 13 00:06:19.232815 env[1486]: time="2025-08-13T00:06:19.232752283Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3567 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:06:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 00:06:19.233168 env[1486]: time="2025-08-13T00:06:19.233057519Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Aug 13 00:06:19.236434 env[1486]: time="2025-08-13T00:06:19.236373799Z" level=error msg="Failed to pipe stdout of container \"de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7\"" error="reading from a closed fifo" Aug 13 00:06:19.236729 env[1486]: time="2025-08-13T00:06:19.236619476Z" level=error msg="Failed to pipe stderr of container \"de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7\"" error="reading from a closed fifo" Aug 13 00:06:19.240833 env[1486]: time="2025-08-13T00:06:19.240748745Z" level=error msg="StartContainer for \"de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 00:06:19.241365 kubelet[1908]: E0813 00:06:19.241279 1908 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown 
desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7" Aug 13 00:06:19.241880 kubelet[1908]: E0813 00:06:19.241818 1908 kuberuntime_manager.go:1358] "Unhandled Error" err=< Aug 13 00:06:19.241880 kubelet[1908]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 00:06:19.241880 kubelet[1908]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 00:06:19.241880 kubelet[1908]: rm /hostbin/cilium-mount Aug 13 00:06:19.242004 kubelet[1908]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bqzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-c4lkc_kube-system(0feb190e-6608-41ca-ae74-25497d9e259f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 00:06:19.242004 kubelet[1908]: > logger="UnhandledError" Aug 13 00:06:19.243341 kubelet[1908]: E0813 00:06:19.243269 1908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-c4lkc" podUID="0feb190e-6608-41ca-ae74-25497d9e259f" Aug 13 00:06:19.449237 kubelet[1908]: E0813 00:06:19.447881 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 
00:06:19.647824 env[1486]: time="2025-08-13T00:06:19.647777262Z" level=info msg="CreateContainer within sandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Aug 13 00:06:19.683308 env[1486]: time="2025-08-13T00:06:19.683244029Z" level=info msg="CreateContainer within sandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e\"" Aug 13 00:06:19.684435 env[1486]: time="2025-08-13T00:06:19.684399335Z" level=info msg="StartContainer for \"6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e\"" Aug 13 00:06:19.699607 systemd[1]: Started cri-containerd-6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e.scope. Aug 13 00:06:19.712866 systemd[1]: cri-containerd-6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e.scope: Deactivated successfully. Aug 13 00:06:19.730207 env[1486]: time="2025-08-13T00:06:19.730146097Z" level=info msg="shim disconnected" id=6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e Aug 13 00:06:19.730207 env[1486]: time="2025-08-13T00:06:19.730202736Z" level=warning msg="cleaning up after shim disconnected" id=6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e namespace=k8s.io Aug 13 00:06:19.730207 env[1486]: time="2025-08-13T00:06:19.730213976Z" level=info msg="cleaning up dead shim" Aug 13 00:06:19.738295 env[1486]: time="2025-08-13T00:06:19.738225879Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3605 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:06:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 00:06:19.738611 env[1486]: time="2025-08-13T00:06:19.738546795Z" level=error msg="copy shim log" error="read /proc/self/fd/70: file already closed" Aug 13 00:06:19.740439 env[1486]: time="2025-08-13T00:06:19.740390732Z" level=error msg="Failed to pipe stdout of container \"6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e\"" error="reading from a closed fifo" Aug 13 00:06:19.740655 env[1486]: time="2025-08-13T00:06:19.740627769Z" level=error msg="Failed to pipe stderr of container \"6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e\"" error="reading from a closed fifo" Aug 13 00:06:19.744720 env[1486]: time="2025-08-13T00:06:19.744658840Z" level=error msg="StartContainer for \"6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 00:06:19.744981 kubelet[1908]: E0813 00:06:19.744932 1908 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e" Aug 13 00:06:19.745572 kubelet[1908]: E0813 00:06:19.745526 1908 kuberuntime_manager.go:1358] "Unhandled Error" err=< Aug 13 00:06:19.745572 kubelet[1908]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 00:06:19.745572 kubelet[1908]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 00:06:19.745572 kubelet[1908]: rm /hostbin/cilium-mount Aug 13 00:06:19.745572 kubelet[1908]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bqzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-c4lkc_kube-system(0feb190e-6608-41ca-ae74-25497d9e259f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 00:06:19.745572 kubelet[1908]: > logger="UnhandledError" Aug 13 00:06:19.747000 kubelet[1908]: E0813 00:06:19.746941 1908 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-c4lkc" podUID="0feb190e-6608-41ca-ae74-25497d9e259f" Aug 13 00:06:20.448432 kubelet[1908]: E0813 00:06:20.448374 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:20.651632 kubelet[1908]: I0813 00:06:20.651598 1908 scope.go:117] "RemoveContainer" containerID="de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7" Aug 13 00:06:20.652438 env[1486]: time="2025-08-13T00:06:20.652202706Z" level=info 
msg="StopPodSandbox for \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\"" Aug 13 00:06:20.652438 env[1486]: time="2025-08-13T00:06:20.652268465Z" level=info msg="Container to stop \"de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:06:20.652438 env[1486]: time="2025-08-13T00:06:20.652283425Z" level=info msg="Container to stop \"6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:06:20.654030 env[1486]: time="2025-08-13T00:06:20.653466011Z" level=info msg="RemoveContainer for \"de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7\"" Aug 13 00:06:20.654659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e-shm.mount: Deactivated successfully. Aug 13 00:06:20.662730 systemd[1]: cri-containerd-5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e.scope: Deactivated successfully. Aug 13 00:06:20.664866 env[1486]: time="2025-08-13T00:06:20.664814395Z" level=info msg="RemoveContainer for \"de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7\" returns successfully" Aug 13 00:06:20.681388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e-rootfs.mount: Deactivated successfully. Aug 13 00:06:20.693031 env[1486]: time="2025-08-13T00:06:20.692966459Z" level=info msg="shim disconnected" id=5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e Aug 13 00:06:20.693031 env[1486]: time="2025-08-13T00:06:20.693027179Z" level=warning msg="cleaning up after shim disconnected" id=5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e namespace=k8s.io Aug 13 00:06:20.693270 env[1486]: time="2025-08-13T00:06:20.693036659Z" level=info msg="cleaning up dead shim" Aug 13 00:06:20.701613 env[1486]: time="2025-08-13T00:06:20.700899925Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3636 runtime=io.containerd.runc.v2\n" Aug 13 00:06:20.701613 env[1486]: time="2025-08-13T00:06:20.701219281Z" level=info msg="TearDown network for sandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" successfully" Aug 13 00:06:20.701613 env[1486]: time="2025-08-13T00:06:20.701242521Z" level=info msg="StopPodSandbox for \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" returns successfully" Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800517 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-bpf-maps\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800559 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cni-path\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800577 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-host-proc-sys-net\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800592 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-etc-cni-netd\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800609 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-xtables-lock\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800595 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800627 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-hostproc\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800645 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-host-proc-sys-kernel\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800665 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0feb190e-6608-41ca-ae74-25497d9e259f-hubble-tls\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800681 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-cgroup\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800645 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800694 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-run\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800712 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-lib-modules\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800729 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-config-path\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800746 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-ipsec-secrets\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801264 kubelet[1908]: I0813 00:06:20.800765 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bqzf\" (UniqueName: \"kubernetes.io/projected/0feb190e-6608-41ca-ae74-25497d9e259f-kube-api-access-4bqzf\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800784 1908 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0feb190e-6608-41ca-ae74-25497d9e259f-clustermesh-secrets\") pod \"0feb190e-6608-41ca-ae74-25497d9e259f\" (UID: \"0feb190e-6608-41ca-ae74-25497d9e259f\") " Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800820 1908 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-bpf-maps\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800830 1908 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-host-proc-sys-net\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800656 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cni-path" (OuterVolumeSpecName: "cni-path") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800667 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800678 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800708 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-hostproc" (OuterVolumeSpecName: "hostproc") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800718 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.800735 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.801448 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.801820 kubelet[1908]: I0813 00:06:20.801734 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:06:20.806331 systemd[1]: var-lib-kubelet-pods-0feb190e\x2d6608\x2d41ca\x2dae74\x2d25497d9e259f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:06:20.809666 systemd[1]: var-lib-kubelet-pods-0feb190e\x2d6608\x2d41ca\x2dae74\x2d25497d9e259f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 00:06:20.814220 kubelet[1908]: I0813 00:06:20.814176 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:06:20.814514 kubelet[1908]: I0813 00:06:20.814482 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:06:20.814643 kubelet[1908]: I0813 00:06:20.814517 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0feb190e-6608-41ca-ae74-25497d9e259f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:06:20.814643 kubelet[1908]: I0813 00:06:20.814570 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0feb190e-6608-41ca-ae74-25497d9e259f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:06:20.815202 kubelet[1908]: I0813 00:06:20.815179 1908 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0feb190e-6608-41ca-ae74-25497d9e259f-kube-api-access-4bqzf" (OuterVolumeSpecName: "kube-api-access-4bqzf") pod "0feb190e-6608-41ca-ae74-25497d9e259f" (UID: "0feb190e-6608-41ca-ae74-25497d9e259f"). InnerVolumeSpecName "kube-api-access-4bqzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:06:20.901799 systemd[1]: var-lib-kubelet-pods-0feb190e\x2d6608\x2d41ca\x2dae74\x2d25497d9e259f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4bqzf.mount: Deactivated successfully. 
Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902097 1908 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-hostproc\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902126 1908 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-host-proc-sys-kernel\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902139 1908 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0feb190e-6608-41ca-ae74-25497d9e259f-hubble-tls\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902148 1908 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-cgroup\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902156 1908 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-run\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902165 1908 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-lib-modules\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902173 1908 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-config-path\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902181 1908 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0feb190e-6608-41ca-ae74-25497d9e259f-cilium-ipsec-secrets\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902189 1908 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4bqzf\" (UniqueName: \"kubernetes.io/projected/0feb190e-6608-41ca-ae74-25497d9e259f-kube-api-access-4bqzf\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902199 1908 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0feb190e-6608-41ca-ae74-25497d9e259f-clustermesh-secrets\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902209 1908 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-cni-path\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902217 1908 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-etc-cni-netd\") on node \"10.200.20.21\" DevicePath \"\"" Aug 13 00:06:20.902948 kubelet[1908]: I0813 00:06:20.902224 1908 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0feb190e-6608-41ca-ae74-25497d9e259f-xtables-lock\") on node \"10.200.20.21\" DevicePath 
\"\"" Aug 13 00:06:20.901919 systemd[1]: var-lib-kubelet-pods-0feb190e\x2d6608\x2d41ca\x2dae74\x2d25497d9e259f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:06:21.401040 kubelet[1908]: E0813 00:06:21.400990 1908 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:21.415092 kubelet[1908]: I0813 00:06:21.415040 1908 scope.go:117] "RemoveContainer" containerID="6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e" Aug 13 00:06:21.416298 env[1486]: time="2025-08-13T00:06:21.416246779Z" level=info msg="RemoveContainer for \"6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e\"" Aug 13 00:06:21.423671 env[1486]: time="2025-08-13T00:06:21.423614053Z" level=info msg="RemoveContainer for \"6349a2fc79edc8201bff191d246e933d5755bf8c321fdf3809ead4f1cf2e275e\" returns successfully" Aug 13 00:06:21.424793 env[1486]: time="2025-08-13T00:06:21.424755719Z" level=info msg="StopPodSandbox for \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\"" Aug 13 00:06:21.424910 env[1486]: time="2025-08-13T00:06:21.424846318Z" level=info msg="TearDown network for sandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" successfully" Aug 13 00:06:21.424910 env[1486]: time="2025-08-13T00:06:21.424878518Z" level=info msg="StopPodSandbox for \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" returns successfully" Aug 13 00:06:21.425408 env[1486]: time="2025-08-13T00:06:21.425378352Z" level=info msg="RemovePodSandbox for \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\"" Aug 13 00:06:21.425490 env[1486]: time="2025-08-13T00:06:21.425408552Z" level=info msg="Forcibly stopping sandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\"" Aug 13 00:06:21.425490 env[1486]: time="2025-08-13T00:06:21.425467551Z" level=info msg="TearDown network for sandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" successfully" Aug 13 00:06:21.433216 env[1486]: time="2025-08-13T00:06:21.433162461Z" level=info msg="RemovePodSandbox \"5508889c094244de3d0a5eb74b03b42a376dcfa2cc2314ab9ee640035f31d47e\" returns successfully" Aug 13 00:06:21.433968 env[1486]: time="2025-08-13T00:06:21.433930172Z" level=info msg="StopPodSandbox for \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\"" Aug 13 00:06:21.434078 env[1486]: time="2025-08-13T00:06:21.434032131Z" level=info msg="TearDown network for sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" successfully" Aug 13 00:06:21.434126 env[1486]: time="2025-08-13T00:06:21.434072811Z" level=info msg="StopPodSandbox for \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" returns successfully" Aug 13 00:06:21.434569 env[1486]: time="2025-08-13T00:06:21.434537845Z" level=info msg="RemovePodSandbox for \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\"" Aug 13 00:06:21.434643 env[1486]: time="2025-08-13T00:06:21.434569725Z" level=info msg="Forcibly stopping sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\"" Aug 13 00:06:21.434679 env[1486]: time="2025-08-13T00:06:21.434643044Z" level=info msg="TearDown network for sandbox \"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" successfully" Aug 13 00:06:21.439837 env[1486]: time="2025-08-13T00:06:21.439780224Z" level=info msg="RemovePodSandbox 
\"1fdf40468e7a5cae2e85108877d6d22a5c1d28c4cf8291a7fe84b8c9e7a3a4ab\" returns successfully" Aug 13 00:06:21.449421 kubelet[1908]: E0813 00:06:21.449380 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:21.517009 kubelet[1908]: E0813 00:06:21.516966 1908 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:06:21.525993 systemd[1]: Removed slice kubepods-burstable-pod0feb190e_6608_41ca_ae74_25497d9e259f.slice. Aug 13 00:06:21.764874 env[1486]: time="2025-08-13T00:06:21.764455195Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:21.765919 systemd[1]: Created slice kubepods-burstable-pod64fa2abf_1d24_471a_8309_0f63aea7c8e2.slice. Aug 13 00:06:21.774424 env[1486]: time="2025-08-13T00:06:21.774373999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:21.781214 env[1486]: time="2025-08-13T00:06:21.781167680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:06:21.781899 env[1486]: time="2025-08-13T00:06:21.781862512Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 13 00:06:21.793081 env[1486]: time="2025-08-13T00:06:21.793021141Z" level=info msg="CreateContainer within sandbox \"35a30acc258e0317e29aeb69ecaec5db86cff5e777cc9e9863f0e99dca32522c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:06:21.815032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362748522.mount: Deactivated successfully. Aug 13 00:06:21.821706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405272866.mount: Deactivated successfully. Aug 13 00:06:21.830651 env[1486]: time="2025-08-13T00:06:21.830593983Z" level=info msg="CreateContainer within sandbox \"35a30acc258e0317e29aeb69ecaec5db86cff5e777cc9e9863f0e99dca32522c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c37279272ef45b6e2b80a8ccf4536df62ea4014a68632999c595e6ccc4e59f16\"" Aug 13 00:06:21.831774 env[1486]: time="2025-08-13T00:06:21.831738329Z" level=info msg="StartContainer for \"c37279272ef45b6e2b80a8ccf4536df62ea4014a68632999c595e6ccc4e59f16\"" Aug 13 00:06:21.850281 systemd[1]: Started cri-containerd-c37279272ef45b6e2b80a8ccf4536df62ea4014a68632999c595e6ccc4e59f16.scope. 
Aug 13 00:06:21.888955 env[1486]: time="2025-08-13T00:06:21.888885542Z" level=info msg="StartContainer for \"c37279272ef45b6e2b80a8ccf4536df62ea4014a68632999c595e6ccc4e59f16\" returns successfully" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908440 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-lib-modules\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908524 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-host-proc-sys-kernel\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908554 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-cilium-cgroup\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908568 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-bpf-maps\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908607 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/64fa2abf-1d24-471a-8309-0f63aea7c8e2-clustermesh-secrets\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908628 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/64fa2abf-1d24-471a-8309-0f63aea7c8e2-hubble-tls\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908671 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-etc-cni-netd\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908688 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-xtables-lock\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908703 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-hostproc\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 
00:06:21.908910 kubelet[1908]: I0813 00:06:21.908741 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/64fa2abf-1d24-471a-8309-0f63aea7c8e2-cilium-ipsec-secrets\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908763 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64fa2abf-1d24-471a-8309-0f63aea7c8e2-cilium-config-path\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908779 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-cni-path\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908817 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-host-proc-sys-net\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908835 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwkhw\" (UniqueName: \"kubernetes.io/projected/64fa2abf-1d24-471a-8309-0f63aea7c8e2-kube-api-access-kwkhw\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:21.908910 kubelet[1908]: I0813 00:06:21.908853 1908 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/64fa2abf-1d24-471a-8309-0f63aea7c8e2-cilium-run\") pod \"cilium-z8q8p\" (UID: \"64fa2abf-1d24-471a-8309-0f63aea7c8e2\") " pod="kube-system/cilium-z8q8p" Aug 13 00:06:22.076535 env[1486]: time="2025-08-13T00:06:22.075966817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z8q8p,Uid:64fa2abf-1d24-471a-8309-0f63aea7c8e2,Namespace:kube-system,Attempt:0,}" Aug 13 00:06:22.106967 env[1486]: time="2025-08-13T00:06:22.106859224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:06:22.106967 env[1486]: time="2025-08-13T00:06:22.106971262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:06:22.107185 env[1486]: time="2025-08-13T00:06:22.106997622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:06:22.107393 env[1486]: time="2025-08-13T00:06:22.107331218Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f pid=3705 runtime=io.containerd.runc.v2 Aug 13 00:06:22.120014 systemd[1]: Started cri-containerd-94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f.scope. 
Aug 13 00:06:22.154766 env[1486]: time="2025-08-13T00:06:22.154709837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z8q8p,Uid:64fa2abf-1d24-471a-8309-0f63aea7c8e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\"" Aug 13 00:06:22.163553 env[1486]: time="2025-08-13T00:06:22.163500457Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:06:22.190314 env[1486]: time="2025-08-13T00:06:22.190217792Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64e45f0bc2343d9eb18a8acaadafb201098ce716aff018749c7c26b0eaaa8d01\"" Aug 13 00:06:22.191379 env[1486]: time="2025-08-13T00:06:22.191306259Z" level=info msg="StartContainer for \"64e45f0bc2343d9eb18a8acaadafb201098ce716aff018749c7c26b0eaaa8d01\"" Aug 13 00:06:22.206328 systemd[1]: Started cri-containerd-64e45f0bc2343d9eb18a8acaadafb201098ce716aff018749c7c26b0eaaa8d01.scope. Aug 13 00:06:22.239365 env[1486]: time="2025-08-13T00:06:22.239292431Z" level=info msg="StartContainer for \"64e45f0bc2343d9eb18a8acaadafb201098ce716aff018749c7c26b0eaaa8d01\" returns successfully" Aug 13 00:06:22.244513 systemd[1]: cri-containerd-64e45f0bc2343d9eb18a8acaadafb201098ce716aff018749c7c26b0eaaa8d01.scope: Deactivated successfully. Aug 13 00:06:22.518149 kubelet[1908]: W0813 00:06:22.328520 1908 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0feb190e_6608_41ca_ae74_25497d9e259f.slice/cri-containerd-de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7.scope WatchSource:0}: container "de1c3f02d9f4a5a85de53745dccae19a7d564131a99a9d527e9c2b2bf0aa47b7" in namespace "k8s.io": not found Aug 13 00:06:22.518149 kubelet[1908]: E0813 00:06:22.350314 1908 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64fa2abf_1d24_471a_8309_0f63aea7c8e2.slice/cri-containerd-94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f.scope\": RecentStats: unable to find data in memory cache]" Aug 13 00:06:22.518149 kubelet[1908]: E0813 00:06:22.449909 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:22.580969 env[1486]: time="2025-08-13T00:06:22.579619784Z" level=info msg="shim disconnected" id=64e45f0bc2343d9eb18a8acaadafb201098ce716aff018749c7c26b0eaaa8d01 Aug 13 00:06:22.580969 env[1486]: time="2025-08-13T00:06:22.579675304Z" level=warning msg="cleaning up after shim disconnected" id=64e45f0bc2343d9eb18a8acaadafb201098ce716aff018749c7c26b0eaaa8d01 namespace=k8s.io Aug 13 00:06:22.580969 env[1486]: time="2025-08-13T00:06:22.579685944Z" level=info msg="cleaning up dead shim" Aug 13 00:06:22.582293 kubelet[1908]: I0813 00:06:22.582199 1908 setters.go:618] "Node became not ready" node="10.200.20.21" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:06:22Z","lastTransitionTime":"2025-08-13T00:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 
00:06:22.590097 env[1486]: time="2025-08-13T00:06:22.590049865Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3788 runtime=io.containerd.runc.v2\n" Aug 13 00:06:22.665880 env[1486]: time="2025-08-13T00:06:22.665693081Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:06:22.698742 env[1486]: time="2025-08-13T00:06:22.698672465Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9c9f32540ccdf0003c0d64a475bc64a31f1bc51f286b2dd79ed97067edf12c29\"" Aug 13 00:06:22.699287 env[1486]: time="2025-08-13T00:06:22.699261058Z" level=info msg="StartContainer for \"9c9f32540ccdf0003c0d64a475bc64a31f1bc51f286b2dd79ed97067edf12c29\"" Aug 13 00:06:22.714281 systemd[1]: Started cri-containerd-9c9f32540ccdf0003c0d64a475bc64a31f1bc51f286b2dd79ed97067edf12c29.scope. Aug 13 00:06:22.725264 kubelet[1908]: I0813 00:06:22.725192 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sdpf5" podStartSLOduration=2.022281208 podStartE2EDuration="4.725173522s" podCreationTimestamp="2025-08-13 00:06:18 +0000 UTC" firstStartedPulling="2025-08-13 00:06:19.081467928 +0000 UTC m=+59.136896961" lastFinishedPulling="2025-08-13 00:06:21.784360202 +0000 UTC m=+61.839789275" observedRunningTime="2025-08-13 00:06:22.687062637 +0000 UTC m=+62.742491710" watchObservedRunningTime="2025-08-13 00:06:22.725173522 +0000 UTC m=+62.780602595" Aug 13 00:06:22.750045 env[1486]: time="2025-08-13T00:06:22.749979959Z" level=info msg="StartContainer for \"9c9f32540ccdf0003c0d64a475bc64a31f1bc51f286b2dd79ed97067edf12c29\" returns successfully" Aug 13 00:06:22.754861 systemd[1]: cri-containerd-9c9f32540ccdf0003c0d64a475bc64a31f1bc51f286b2dd79ed97067edf12c29.scope: Deactivated successfully. 
Aug 13 00:06:22.787001 env[1486]: time="2025-08-13T00:06:22.786870497Z" level=info msg="shim disconnected" id=9c9f32540ccdf0003c0d64a475bc64a31f1bc51f286b2dd79ed97067edf12c29 Aug 13 00:06:22.787001 env[1486]: time="2025-08-13T00:06:22.786923137Z" level=warning msg="cleaning up after shim disconnected" id=9c9f32540ccdf0003c0d64a475bc64a31f1bc51f286b2dd79ed97067edf12c29 namespace=k8s.io Aug 13 00:06:22.787001 env[1486]: time="2025-08-13T00:06:22.786933297Z" level=info msg="cleaning up dead shim" Aug 13 00:06:22.795657 env[1486]: time="2025-08-13T00:06:22.795600118Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3851 runtime=io.containerd.runc.v2\n" Aug 13 00:06:23.451015 kubelet[1908]: E0813 00:06:23.450966 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:23.522212 kubelet[1908]: I0813 00:06:23.522169 1908 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0feb190e-6608-41ca-ae74-25497d9e259f" path="/var/lib/kubelet/pods/0feb190e-6608-41ca-ae74-25497d9e259f/volumes" Aug 13 00:06:23.672225 env[1486]: time="2025-08-13T00:06:23.672174427Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:06:23.698559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633555665.mount: Deactivated successfully. Aug 13 00:06:23.705713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073355538.mount: Deactivated successfully. Aug 13 00:06:23.720744 env[1486]: time="2025-08-13T00:06:23.720675564Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"980ce859e76f554eaccca238a30791ba220b8600bea81e3d89faba5607701058\"" Aug 13 00:06:23.721815 env[1486]: time="2025-08-13T00:06:23.721783592Z" level=info msg="StartContainer for \"980ce859e76f554eaccca238a30791ba220b8600bea81e3d89faba5607701058\"" Aug 13 00:06:23.739410 systemd[1]: Started cri-containerd-980ce859e76f554eaccca238a30791ba220b8600bea81e3d89faba5607701058.scope. Aug 13 00:06:23.777691 env[1486]: time="2025-08-13T00:06:23.777634768Z" level=info msg="StartContainer for \"980ce859e76f554eaccca238a30791ba220b8600bea81e3d89faba5607701058\" returns successfully" Aug 13 00:06:23.780004 systemd[1]: cri-containerd-980ce859e76f554eaccca238a30791ba220b8600bea81e3d89faba5607701058.scope: Deactivated successfully. 
Aug 13 00:06:23.812774 env[1486]: time="2025-08-13T00:06:23.812722055Z" level=info msg="shim disconnected" id=980ce859e76f554eaccca238a30791ba220b8600bea81e3d89faba5607701058 Aug 13 00:06:23.813404 env[1486]: time="2025-08-13T00:06:23.813375968Z" level=warning msg="cleaning up after shim disconnected" id=980ce859e76f554eaccca238a30791ba220b8600bea81e3d89faba5607701058 namespace=k8s.io Aug 13 00:06:23.813507 env[1486]: time="2025-08-13T00:06:23.813489327Z" level=info msg="cleaning up dead shim" Aug 13 00:06:23.831251 env[1486]: time="2025-08-13T00:06:23.831192449Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3911 runtime=io.containerd.runc.v2\n" Aug 13 00:06:24.452116 kubelet[1908]: E0813 00:06:24.452051 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:24.674421 env[1486]: time="2025-08-13T00:06:24.674367059Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:06:24.702165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431494656.mount: Deactivated successfully. Aug 13 00:06:24.707287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810075925.mount: Deactivated successfully. Aug 13 00:06:24.721810 env[1486]: time="2025-08-13T00:06:24.721745220Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e5c98c68a6e31aa6afb8dffa651ae4015bde160e75dea9130621619f0a646eae\"" Aug 13 00:06:24.722517 env[1486]: time="2025-08-13T00:06:24.722474052Z" level=info msg="StartContainer for \"e5c98c68a6e31aa6afb8dffa651ae4015bde160e75dea9130621619f0a646eae\"" Aug 13 00:06:24.744479 systemd[1]: Started cri-containerd-e5c98c68a6e31aa6afb8dffa651ae4015bde160e75dea9130621619f0a646eae.scope. Aug 13 00:06:24.771033 systemd[1]: cri-containerd-e5c98c68a6e31aa6afb8dffa651ae4015bde160e75dea9130621619f0a646eae.scope: Deactivated successfully. 
Aug 13 00:06:24.773670 env[1486]: time="2025-08-13T00:06:24.773548293Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64fa2abf_1d24_471a_8309_0f63aea7c8e2.slice/cri-containerd-e5c98c68a6e31aa6afb8dffa651ae4015bde160e75dea9130621619f0a646eae.scope/memory.events\": no such file or directory" Aug 13 00:06:24.778981 env[1486]: time="2025-08-13T00:06:24.778920915Z" level=info msg="StartContainer for \"e5c98c68a6e31aa6afb8dffa651ae4015bde160e75dea9130621619f0a646eae\" returns successfully" Aug 13 00:06:24.809850 env[1486]: time="2025-08-13T00:06:24.809803617Z" level=info msg="shim disconnected" id=e5c98c68a6e31aa6afb8dffa651ae4015bde160e75dea9130621619f0a646eae Aug 13 00:06:24.810108 env[1486]: time="2025-08-13T00:06:24.810088214Z" level=warning msg="cleaning up after shim disconnected" id=e5c98c68a6e31aa6afb8dffa651ae4015bde160e75dea9130621619f0a646eae namespace=k8s.io Aug 13 00:06:24.810187 env[1486]: time="2025-08-13T00:06:24.810174013Z" level=info msg="cleaning up dead shim" Aug 13 00:06:24.817822 env[1486]: time="2025-08-13T00:06:24.817765450Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3968 runtime=io.containerd.runc.v2\n" Aug 13 00:06:25.452249 kubelet[1908]: E0813 00:06:25.452194 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:25.682479 env[1486]: time="2025-08-13T00:06:25.682431540Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:06:25.720204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378769855.mount: Deactivated successfully. Aug 13 00:06:25.738334 env[1486]: time="2025-08-13T00:06:25.738258582Z" level=info msg="CreateContainer within sandbox \"94bf3fe044bde1baeb8afe1e0f54f3c78cb6a1636143ce6cab4774036169364f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"436764dcc6692c34c2185ba541ed6eb57d9d0b29082b9973d7937610ca658b80\"" Aug 13 00:06:25.739336 env[1486]: time="2025-08-13T00:06:25.739292651Z" level=info msg="StartContainer for \"436764dcc6692c34c2185ba541ed6eb57d9d0b29082b9973d7937610ca658b80\"" Aug 13 00:06:25.754203 systemd[1]: Started cri-containerd-436764dcc6692c34c2185ba541ed6eb57d9d0b29082b9973d7937610ca658b80.scope. 
Aug 13 00:06:25.792200 env[1486]: time="2025-08-13T00:06:25.792136365Z" level=info msg="StartContainer for \"436764dcc6692c34c2185ba541ed6eb57d9d0b29082b9973d7937610ca658b80\" returns successfully" Aug 13 00:06:26.112536 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Aug 13 00:06:26.452864 kubelet[1908]: E0813 00:06:26.452819 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:26.708525 kubelet[1908]: I0813 00:06:26.708469 1908 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z8q8p" podStartSLOduration=5.70845282 podStartE2EDuration="5.70845282s" podCreationTimestamp="2025-08-13 00:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:06:26.708162503 +0000 UTC m=+66.763591576" watchObservedRunningTime="2025-08-13 00:06:26.70845282 +0000 UTC m=+66.763881893" Aug 13 00:06:26.830801 systemd[1]: run-containerd-runc-k8s.io-436764dcc6692c34c2185ba541ed6eb57d9d0b29082b9973d7937610ca658b80-runc.BzdP9e.mount: Deactivated successfully. Aug 13 00:06:27.453648 kubelet[1908]: E0813 00:06:27.453586 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:28.453974 kubelet[1908]: E0813 00:06:28.453928 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:28.957733 systemd-networkd[1633]: lxc_health: Link UP Aug 13 00:06:28.989480 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:06:28.989528 systemd-networkd[1633]: lxc_health: Gained carrier Aug 13 00:06:29.455255 kubelet[1908]: E0813 00:06:29.455113 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:30.305460 systemd-networkd[1633]: lxc_health: Gained IPv6LL Aug 13 00:06:30.456033 kubelet[1908]: E0813 00:06:30.455989 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:31.228261 systemd[1]: run-containerd-runc-k8s.io-436764dcc6692c34c2185ba541ed6eb57d9d0b29082b9973d7937610ca658b80-runc.Gm2ryu.mount: Deactivated successfully. Aug 13 00:06:31.457665 kubelet[1908]: E0813 00:06:31.457601 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:32.458641 kubelet[1908]: E0813 00:06:32.458601 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:33.363661 systemd[1]: run-containerd-runc-k8s.io-436764dcc6692c34c2185ba541ed6eb57d9d0b29082b9973d7937610ca658b80-runc.SuENGz.mount: Deactivated successfully. 
Aug 13 00:06:33.459664 kubelet[1908]: E0813 00:06:33.459613 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:34.460615 kubelet[1908]: E0813 00:06:34.460574 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:35.461867 kubelet[1908]: E0813 00:06:35.461823 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:35.501909 systemd[1]: run-containerd-runc-k8s.io-436764dcc6692c34c2185ba541ed6eb57d9d0b29082b9973d7937610ca658b80-runc.vtoG07.mount: Deactivated successfully. Aug 13 00:06:36.461999 kubelet[1908]: E0813 00:06:36.461962 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:37.462710 kubelet[1908]: E0813 00:06:37.462666 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:38.463620 kubelet[1908]: E0813 00:06:38.463571 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:39.464591 kubelet[1908]: E0813 00:06:39.464553 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:40.465486 kubelet[1908]: E0813 00:06:40.465444 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:41.400860 kubelet[1908]: E0813 00:06:41.400822 1908 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Aug 13 00:06:41.465772 kubelet[1908]: E0813 00:06:41.465737 1908 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"