Jan 29 16:07:01.291289 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 16:07:01.291311 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025
Jan 29 16:07:01.291319 kernel: KASLR enabled
Jan 29 16:07:01.291325 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 29 16:07:01.291332 kernel: printk: bootconsole [pl11] enabled
Jan 29 16:07:01.291337 kernel: efi: EFI v2.7 by EDK II
Jan 29 16:07:01.291344 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jan 29 16:07:01.291350 kernel: random: crng init done
Jan 29 16:07:01.291356 kernel: secureboot: Secure boot disabled
Jan 29 16:07:01.291362 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:07:01.291368 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 29 16:07:01.291374 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291379 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291387 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 29 16:07:01.291394 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291400 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291407 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291414 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291421 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291427 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291433 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 29 16:07:01.291439 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 16:07:01.291445 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 29 16:07:01.291451 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 29 16:07:01.291457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 29 16:07:01.291464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 29 16:07:01.291470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 29 16:07:01.291476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 29 16:07:01.291483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 29 16:07:01.291490 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 29 16:07:01.291496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 29 16:07:01.291502 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 29 16:07:01.291508 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 29 16:07:01.291514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 29 16:07:01.291520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 29 16:07:01.291526 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 29 16:07:01.291533 kernel: Zone ranges:
Jan 29 16:07:01.291539 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Jan 29 16:07:01.291545 kernel:   DMA32    empty
Jan 29 16:07:01.291551 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Jan 29 16:07:01.291561 kernel: Movable zone start for each node
Jan 29 16:07:01.291567 kernel: Early memory node ranges
Jan 29 16:07:01.291574 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 29 16:07:01.291580 kernel:   node   0: [mem 0x0000000000824000-0x000000003e45ffff]
Jan 29 16:07:01.291587 kernel:   node   0: [mem 0x000000003e460000-0x000000003e46ffff]
Jan 29 16:07:01.291595 kernel:   node   0: [mem 0x000000003e470000-0x000000003e54ffff]
Jan 29 16:07:01.291601 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 29 16:07:01.291608 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 29 16:07:01.291614 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 29 16:07:01.291621 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 29 16:07:01.291627 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 29 16:07:01.293664 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 29 16:07:01.293672 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 29 16:07:01.293679 kernel: psci: probing for conduit method from ACPI.
Jan 29 16:07:01.293686 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 16:07:01.293692 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 16:07:01.293699 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 29 16:07:01.293710 kernel: psci: SMC Calling Convention v1.4
Jan 29 16:07:01.293717 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 29 16:07:01.293723 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 29 16:07:01.293730 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 16:07:01.293736 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 16:07:01.293743 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 16:07:01.293749 kernel: Detected PIPT I-cache on CPU0
Jan 29 16:07:01.293756 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 16:07:01.293763 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 16:07:01.293770 kernel: CPU features: detected: Spectre-BHB
Jan 29 16:07:01.293776 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 16:07:01.293784 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 16:07:01.293791 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 16:07:01.293798 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 29 16:07:01.293804 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 16:07:01.293811 kernel: alternatives: applying boot alternatives
Jan 29 16:07:01.293819 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:07:01.293826 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:07:01.293833 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:07:01.293839 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:07:01.293846 kernel: Fallback order for Node 0: 0
Jan 29 16:07:01.293852 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1032156
Jan 29 16:07:01.293861 kernel: Policy zone: Normal
Jan 29 16:07:01.293867 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:07:01.293874 kernel: software IO TLB: area num 2.
Jan 29 16:07:01.293880 kernel: software IO TLB: mapped [mem 0x0000000036550000-0x000000003a550000] (64MB)
Jan 29 16:07:01.293887 kernel: Memory: 3983652K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 210508K reserved, 0K cma-reserved)
Jan 29 16:07:01.293894 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:07:01.293900 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:07:01.293907 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:07:01.293914 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:07:01.293921 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:07:01.293927 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:07:01.293935 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:07:01.293942 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:07:01.293948 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 16:07:01.293955 kernel: GICv3: 960 SPIs implemented
Jan 29 16:07:01.293961 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 16:07:01.293968 kernel: Root IRQ handler: gic_handle_irq
Jan 29 16:07:01.293974 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 16:07:01.293981 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 29 16:07:01.293987 kernel: ITS: No ITS available, not enabling LPIs
Jan 29 16:07:01.293994 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:07:01.294001 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:07:01.294008 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 16:07:01.294016 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 16:07:01.294023 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 16:07:01.294029 kernel: Console: colour dummy device 80x25
Jan 29 16:07:01.294036 kernel: printk: console [tty1] enabled
Jan 29 16:07:01.294043 kernel: ACPI: Core revision 20230628
Jan 29 16:07:01.294050 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 16:07:01.294057 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:07:01.294063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:07:01.294070 kernel: landlock: Up and running.
Jan 29 16:07:01.294078 kernel: SELinux:  Initializing.
Jan 29 16:07:01.294085 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:07:01.294092 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:07:01.294099 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:07:01.294106 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:07:01.294113 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 29 16:07:01.294120 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 29 16:07:01.294132 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 29 16:07:01.294139 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:07:01.294147 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 29 16:07:01.294154 kernel: Remapping and enabling EFI services.
Jan 29 16:07:01.294161 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:07:01.294170 kernel: Detected PIPT I-cache on CPU1
Jan 29 16:07:01.294177 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 29 16:07:01.294190 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:07:01.294197 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 16:07:01.294204 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:07:01.294213 kernel: SMP: Total of 2 processors activated.
Jan 29 16:07:01.294220 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 16:07:01.294227 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 29 16:07:01.294234 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 16:07:01.294241 kernel: CPU features: detected: CRC32 instructions
Jan 29 16:07:01.294248 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 16:07:01.294255 kernel: CPU features: detected: LSE atomic instructions
Jan 29 16:07:01.294262 kernel: CPU features: detected: Privileged Access Never
Jan 29 16:07:01.294269 kernel: CPU: All CPU(s) started at EL1
Jan 29 16:07:01.294278 kernel: alternatives: applying system-wide alternatives
Jan 29 16:07:01.294285 kernel: devtmpfs: initialized
Jan 29 16:07:01.294292 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:07:01.294300 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:07:01.294307 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:07:01.294314 kernel: SMBIOS 3.1.0 present.
Jan 29 16:07:01.294322 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 29 16:07:01.294329 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:07:01.294336 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 16:07:01.294345 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 16:07:01.294352 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 16:07:01.294359 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:07:01.294367 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1
Jan 29 16:07:01.294373 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:07:01.294380 kernel: cpuidle: using governor menu
Jan 29 16:07:01.294387 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 16:07:01.294395 kernel: ASID allocator initialised with 32768 entries
Jan 29 16:07:01.294402 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:07:01.294410 kernel: Serial: AMBA PL011 UART driver
Jan 29 16:07:01.294417 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 16:07:01.294424 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 16:07:01.294431 kernel: Modules: 509280 pages in range for PLT usage
Jan 29 16:07:01.294438 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:07:01.294446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:07:01.294453 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 16:07:01.294460 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 16:07:01.294467 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:07:01.294475 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:07:01.294482 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 16:07:01.294489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 16:07:01.294497 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:07:01.294504 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:07:01.294511 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:07:01.294518 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:07:01.294525 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:07:01.294532 kernel: ACPI: Interpreter enabled
Jan 29 16:07:01.294540 kernel: ACPI: Using GIC for interrupt routing
Jan 29 16:07:01.294547 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 16:07:01.294554 kernel: printk: console [ttyAMA0] enabled
Jan 29 16:07:01.294561 kernel: printk: bootconsole [pl11] disabled
Jan 29 16:07:01.294569 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 29 16:07:01.294576 kernel: iommu: Default domain type: Translated
Jan 29 16:07:01.294583 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 16:07:01.294590 kernel: efivars: Registered efivars operations
Jan 29 16:07:01.294597 kernel: vgaarb: loaded
Jan 29 16:07:01.294605 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 16:07:01.294612 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:07:01.294619 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:07:01.294626 kernel: pnp: PnP ACPI init
Jan 29 16:07:01.294643 kernel: pnp: PnP ACPI: found 0 devices
Jan 29 16:07:01.294651 kernel: NET: Registered PF_INET protocol family
Jan 29 16:07:01.294658 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:07:01.294665 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:07:01.294672 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:07:01.294681 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:07:01.294689 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:07:01.294696 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:07:01.294703 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:07:01.294710 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:07:01.294717 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:07:01.294724 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:07:01.294731 kernel: kvm [1]: HYP mode not available
Jan 29 16:07:01.294738 kernel: Initialise system trusted keyrings
Jan 29 16:07:01.294747 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:07:01.294754 kernel: Key type asymmetric registered
Jan 29 16:07:01.294760 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:07:01.294767 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 16:07:01.294774 kernel: io scheduler mq-deadline registered
Jan 29 16:07:01.294781 kernel: io scheduler kyber registered
Jan 29 16:07:01.294788 kernel: io scheduler bfq registered
Jan 29 16:07:01.294795 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:07:01.294802 kernel: thunder_xcv, ver 1.0
Jan 29 16:07:01.294811 kernel: thunder_bgx, ver 1.0
Jan 29 16:07:01.294818 kernel: nicpf, ver 1.0
Jan 29 16:07:01.294825 kernel: nicvf, ver 1.0
Jan 29 16:07:01.294987 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 16:07:01.295060 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T16:07:00 UTC (1738166820)
Jan 29 16:07:01.295070 kernel: efifb: probing for efifb
Jan 29 16:07:01.295077 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 29 16:07:01.295084 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 29 16:07:01.295094 kernel: efifb: scrolling: redraw
Jan 29 16:07:01.295101 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 16:07:01.295108 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 16:07:01.295115 kernel: fb0: EFI VGA frame buffer device
Jan 29 16:07:01.295123 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 29 16:07:01.295130 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:07:01.295137 kernel: No ACPI PMU IRQ for CPU0
Jan 29 16:07:01.295143 kernel: No ACPI PMU IRQ for CPU1
Jan 29 16:07:01.295151 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 29 16:07:01.295160 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 16:07:01.295167 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 16:07:01.295174 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:07:01.295181 kernel: Segment Routing with IPv6
Jan 29 16:07:01.295188 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:07:01.295195 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:07:01.295202 kernel: Key type dns_resolver registered
Jan 29 16:07:01.295209 kernel: registered taskstats version 1
Jan 29 16:07:01.295216 kernel: Loading compiled-in X.509 certificates
Jan 29 16:07:01.295225 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b'
Jan 29 16:07:01.295232 kernel: Key type .fscrypt registered
Jan 29 16:07:01.295239 kernel: Key type fscrypt-provisioning registered
Jan 29 16:07:01.295246 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:07:01.295253 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:07:01.295260 kernel: ima: No architecture policies found
Jan 29 16:07:01.295267 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 16:07:01.295274 kernel: clk: Disabling unused clocks
Jan 29 16:07:01.295281 kernel: Freeing unused kernel memory: 38336K
Jan 29 16:07:01.295290 kernel: Run /init as init process
Jan 29 16:07:01.295297 kernel:   with arguments:
Jan 29 16:07:01.295304 kernel:     /init
Jan 29 16:07:01.295311 kernel:   with environment:
Jan 29 16:07:01.295317 kernel:     HOME=/
Jan 29 16:07:01.295324 kernel:     TERM=linux
Jan 29 16:07:01.295331 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:07:01.295339 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:07:01.295351 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:07:01.295359 systemd[1]: Detected virtualization microsoft.
Jan 29 16:07:01.295367 systemd[1]: Detected architecture arm64.
Jan 29 16:07:01.295374 systemd[1]: Running in initrd.
Jan 29 16:07:01.295381 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:07:01.295389 systemd[1]: Hostname set to .
Jan 29 16:07:01.295396 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:07:01.295404 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:07:01.295413 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:07:01.295420 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:07:01.295428 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:07:01.295436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:07:01.295444 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:07:01.295452 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:07:01.295461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:07:01.295470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:07:01.295478 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:07:01.295485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:07:01.295493 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:07:01.295500 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:07:01.295508 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:07:01.295515 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:07:01.295523 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:07:01.295532 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:07:01.295540 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:07:01.295547 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:07:01.295555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:07:01.295562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:07:01.295570 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:07:01.295577 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:07:01.295585 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:07:01.295593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:07:01.295602 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:07:01.295610 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:07:01.295617 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:07:01.295625 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:07:01.301091 systemd-journald[218]: Collecting audit messages is disabled.
Jan 29 16:07:01.301126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:01.301135 systemd-journald[218]: Journal started
Jan 29 16:07:01.301154 systemd-journald[218]: Runtime Journal (/run/log/journal/5d0b1e122aba49779c4a35f2d07552e8) is 8M, max 78.5M, 70.5M free.
Jan 29 16:07:01.301806 systemd-modules-load[220]: Inserted module 'overlay'
Jan 29 16:07:01.317050 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:07:01.317671 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:07:01.328937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:07:01.368326 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:07:01.368349 kernel: Bridge firewalling registered
Jan 29 16:07:01.355466 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:07:01.373279 systemd-modules-load[220]: Inserted module 'br_netfilter'
Jan 29 16:07:01.374093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:07:01.390141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:01.410845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:07:01.419796 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:07:01.443160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:07:01.451836 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:07:01.473480 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:01.487996 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:07:01.495240 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:07:01.511996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:07:01.546855 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:07:01.556833 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:07:01.584772 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:07:01.603872 dracut-cmdline[252]: dracut-dracut-053
Jan 29 16:07:01.603872 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:07:01.608425 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:07:01.622348 systemd-resolved[255]: Positive Trust Anchors:
Jan 29 16:07:01.622360 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:07:01.622391 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:07:01.624595 systemd-resolved[255]: Defaulting to hostname 'linux'.
Jan 29 16:07:01.640008 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:07:01.662487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:07:01.791655 kernel: SCSI subsystem initialized
Jan 29 16:07:01.799659 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:07:01.810704 kernel: iscsi: registered transport (tcp)
Jan 29 16:07:01.830156 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:07:01.830176 kernel: QLogic iSCSI HBA Driver
Jan 29 16:07:01.863173 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:07:01.881965 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:07:01.913130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:07:01.913175 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:07:01.920322 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:07:01.969653 kernel: raid6: neonx8   gen() 15769 MB/s
Jan 29 16:07:01.989640 kernel: raid6: neonx4   gen() 15814 MB/s
Jan 29 16:07:02.009640 kernel: raid6: neonx2   gen() 13207 MB/s
Jan 29 16:07:02.030640 kernel: raid6: neonx1   gen() 10497 MB/s
Jan 29 16:07:02.050640 kernel: raid6: int64x8  gen()  6795 MB/s
Jan 29 16:07:02.070640 kernel: raid6: int64x4  gen()  7349 MB/s
Jan 29 16:07:02.091640 kernel: raid6: int64x2  gen()  6111 MB/s
Jan 29 16:07:02.116086 kernel: raid6: int64x1  gen()  5056 MB/s
Jan 29 16:07:02.116098 kernel: raid6: using algorithm neonx4 gen() 15814 MB/s
Jan 29 16:07:02.141065 kernel: raid6: .... xor() 12398 MB/s, rmw enabled
Jan 29 16:07:02.141077 kernel: raid6: using neon recovery algorithm
Jan 29 16:07:02.150644 kernel: xor: measuring software checksum speed
Jan 29 16:07:02.154642 kernel:    8regs           : 20157 MB/sec
Jan 29 16:07:02.162865 kernel:    32regs          : 20723 MB/sec
Jan 29 16:07:02.162881 kernel:    arm64_neon      : 27965 MB/sec
Jan 29 16:07:02.167408 kernel: xor: using function: arm64_neon (27965 MB/sec)
Jan 29 16:07:02.218656 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:07:02.230050 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:07:02.254793 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:07:02.283380 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Jan 29 16:07:02.290100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:07:02.310798 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:07:02.345295 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Jan 29 16:07:02.377116 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:07:02.394804 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:07:02.432536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:07:02.455454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:07:02.483964 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:07:02.499908 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:07:02.523932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:07:02.540430 kernel: hv_vmbus: Vmbus version:5.3
Jan 29 16:07:02.548645 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:07:02.569658 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 16:07:02.569712 kernel: hv_vmbus: registering driver hid_hyperv
Jan 29 16:07:02.592516 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 29 16:07:02.592587 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 29 16:07:02.628128 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 29 16:07:02.628149 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 29 16:07:02.628174 kernel: hv_vmbus: registering driver hv_netvsc
Jan 29 16:07:02.592962 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:07:02.653787 kernel: PTP clock support registered
Jan 29 16:07:02.653808 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 29 16:07:02.620165 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:07:02.640413 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:07:02.326625 kernel: hv_utils: Registering HyperV Utility Driver Jan 29 16:07:02.342914 kernel: hv_vmbus: registering driver hv_utils Jan 29 16:07:02.342931 kernel: hv_vmbus: registering driver hv_storvsc Jan 29 16:07:02.342939 kernel: hv_utils: Heartbeat IC version 3.0 Jan 29 16:07:02.342948 kernel: hv_utils: Shutdown IC version 3.2 Jan 29 16:07:02.342956 kernel: hv_utils: TimeSync IC version 4.0 Jan 29 16:07:02.342964 kernel: scsi host0: storvsc_host_t Jan 29 16:07:02.343107 kernel: scsi host1: storvsc_host_t Jan 29 16:07:02.343195 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 29 16:07:02.343217 systemd-journald[218]: Time jumped backwards, rotating. Jan 29 16:07:02.640584 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:07:02.378369 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 29 16:07:02.378414 kernel: hv_netvsc 0022487a-a28f-0022-487a-a28f0022487a eth0: VF slot 1 added Jan 29 16:07:02.672734 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:07:02.305070 systemd-resolved[255]: Clock change detected. Flushing caches. Jan 29 16:07:02.315519 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 29 16:07:02.424084 kernel: hv_vmbus: registering driver hv_pci Jan 29 16:07:02.424102 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 29 16:07:02.462091 kernel: hv_pci 4cc0f7ad-855e-49a2-940c-7f8ef3d37a6d: PCI VMBus probing: Using version 0x10004 Jan 29 16:07:02.560865 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 16:07:02.560884 kernel: hv_pci 4cc0f7ad-855e-49a2-940c-7f8ef3d37a6d: PCI host bridge to bus 855e:00 Jan 29 16:07:02.560988 kernel: pci_bus 855e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 29 16:07:02.561159 kernel: pci_bus 855e:00: No busn resource found for root bus, will use [bus 00-ff] Jan 29 16:07:02.561241 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 29 16:07:02.561342 kernel: pci 855e:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 29 16:07:02.561437 kernel: pci 855e:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 29 16:07:02.561520 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 29 16:07:02.581122 kernel: pci 855e:00:02.0: enabling Extended Tags Jan 29 16:07:02.581318 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 29 16:07:02.581436 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 16:07:02.581521 kernel: pci 855e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 855e:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 29 16:07:02.581621 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 29 16:07:02.581706 kernel: pci_bus 855e:00: busn_res: [bus 00-ff] end is updated to 00 Jan 29 16:07:02.581788 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 29 16:07:02.581871 kernel: pci 855e:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 29 16:07:02.581961 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:07:02.581970 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 16:07:02.315741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual 
Console Setup. Jan 29 16:07:02.340766 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:07:02.371408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:07:02.379355 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:07:02.404637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:07:02.460277 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:07:02.562580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:07:02.653433 kernel: mlx5_core 855e:00:02.0: enabling device (0000 -> 0002) Jan 29 16:07:02.865451 kernel: mlx5_core 855e:00:02.0: firmware version: 16.30.1284 Jan 29 16:07:02.865583 kernel: hv_netvsc 0022487a-a28f-0022-487a-a28f0022487a eth0: VF registering: eth1 Jan 29 16:07:02.865675 kernel: mlx5_core 855e:00:02.0 eth1: joined to eth0 Jan 29 16:07:02.865768 kernel: mlx5_core 855e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 29 16:07:02.877077 kernel: mlx5_core 855e:00:02.0 enP34142s1: renamed from eth1 Jan 29 16:07:03.010970 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 29 16:07:03.095057 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (494) Jan 29 16:07:03.112553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 29 16:07:03.139874 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 29 16:07:03.166460 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (505) Jan 29 16:07:03.185520 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Jan 29 16:07:03.193656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 29 16:07:03.227247 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:07:03.256053 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:07:04.275072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:07:04.276153 disk-uuid[605]: The operation has completed successfully. Jan 29 16:07:04.331809 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:07:04.331899 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:07:04.378167 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:07:04.390768 sh[692]: Success Jan 29 16:07:04.419609 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 16:07:04.586193 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:07:04.605047 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:07:04.614878 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 16:07:04.646145 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a Jan 29 16:07:04.646194 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:07:04.652731 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:07:04.657864 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:07:04.662087 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:07:04.862980 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:07:04.868884 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 29 16:07:04.887253 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:07:04.896121 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:07:04.935840 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:07:04.935895 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:07:04.940421 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:07:04.959375 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:07:04.975137 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:07:04.981268 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:07:04.986934 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:07:05.004250 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:07:05.011112 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:07:05.034206 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:07:05.070730 systemd-networkd[877]: lo: Link UP Jan 29 16:07:05.070745 systemd-networkd[877]: lo: Gained carrier Jan 29 16:07:05.072434 systemd-networkd[877]: Enumeration completed Jan 29 16:07:05.072539 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:07:05.080688 systemd[1]: Reached target network.target - Network. Jan 29 16:07:05.089961 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:07:05.089965 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 16:07:05.171061 kernel: mlx5_core 855e:00:02.0 enP34142s1: Link up Jan 29 16:07:05.209724 systemd-networkd[877]: enP34142s1: Link UP Jan 29 16:07:05.213999 kernel: hv_netvsc 0022487a-a28f-0022-487a-a28f0022487a eth0: Data path switched to VF: enP34142s1 Jan 29 16:07:05.209807 systemd-networkd[877]: eth0: Link UP Jan 29 16:07:05.209903 systemd-networkd[877]: eth0: Gained carrier Jan 29 16:07:05.209912 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:07:05.222005 systemd-networkd[877]: enP34142s1: Gained carrier Jan 29 16:07:05.241072 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 29 16:07:05.793312 ignition[870]: Ignition 2.20.0 Jan 29 16:07:05.793323 ignition[870]: Stage: fetch-offline Jan 29 16:07:05.799347 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:07:05.793359 ignition[870]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:07:05.793366 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 16:07:05.793470 ignition[870]: parsed url from cmdline: "" Jan 29 16:07:05.793474 ignition[870]: no config URL provided Jan 29 16:07:05.793479 ignition[870]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:07:05.793487 ignition[870]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:07:05.827213 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 16:07:05.793494 ignition[870]: failed to fetch config: resource requires networking Jan 29 16:07:05.793669 ignition[870]: Ignition finished successfully Jan 29 16:07:05.850857 ignition[886]: Ignition 2.20.0 Jan 29 16:07:05.850873 ignition[886]: Stage: fetch Jan 29 16:07:05.851083 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:07:05.851092 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 16:07:05.851179 ignition[886]: parsed url from cmdline: "" Jan 29 16:07:05.851182 ignition[886]: no config URL provided Jan 29 16:07:05.851187 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:07:05.851194 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:07:05.851220 ignition[886]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 29 16:07:05.966221 ignition[886]: GET result: OK Jan 29 16:07:05.966876 ignition[886]: config has been read from IMDS userdata Jan 29 16:07:05.966916 ignition[886]: parsing config with SHA512: 612c63b6a479ea5cbc8b32a8ed27051707a63c4bb515e813c61959c754e8626f2929bc4918cd0fdf76bf6260078c8a829f0605bda69f74242f09a2461ad2d81e Jan 29 16:07:05.971208 unknown[886]: fetched base config from "system" Jan 29 16:07:05.971621 ignition[886]: fetch: fetch complete Jan 29 16:07:05.971215 unknown[886]: fetched base config from "system" Jan 29 16:07:05.971627 ignition[886]: fetch: fetch passed Jan 29 16:07:05.971221 unknown[886]: fetched user config from "azure" Jan 29 16:07:05.971679 ignition[886]: Ignition finished successfully Jan 29 16:07:05.974511 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 16:07:05.997271 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 16:07:06.019691 ignition[893]: Ignition 2.20.0 Jan 29 16:07:06.019701 ignition[893]: Stage: kargs Jan 29 16:07:06.022545 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 29 16:07:06.019914 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:07:06.036303 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:07:06.019923 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 16:07:06.021022 ignition[893]: kargs: kargs passed Jan 29 16:07:06.065817 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:07:06.021080 ignition[893]: Ignition finished successfully Jan 29 16:07:06.074812 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:07:06.061753 ignition[899]: Ignition 2.20.0 Jan 29 16:07:06.086298 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:07:06.061760 ignition[899]: Stage: disks Jan 29 16:07:06.097244 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:07:06.061932 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:07:06.108224 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:07:06.061942 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 16:07:06.117448 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:07:06.062957 ignition[899]: disks: disks passed Jan 29 16:07:06.142273 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:07:06.063006 ignition[899]: Ignition finished successfully Jan 29 16:07:06.198326 systemd-fsck[908]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 29 16:07:06.205539 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:07:06.222186 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:07:06.281060 kernel: EXT4-fs (sda9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none. Jan 29 16:07:06.282279 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jan 29 16:07:06.287384 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:07:06.325124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:07:06.334947 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:07:06.342216 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 16:07:06.354756 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:07:06.354799 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:07:06.369006 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:07:06.405289 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 16:07:06.427760 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (919) Jan 29 16:07:06.427784 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:07:06.434442 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:07:06.438682 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:07:06.445047 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:07:06.446538 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:07:06.552182 systemd-networkd[877]: eth0: Gained IPv6LL Jan 29 16:07:06.616359 systemd-networkd[877]: enP34142s1: Gained IPv6LL Jan 29 16:07:06.830660 coreos-metadata[921]: Jan 29 16:07:06.830 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 16:07:06.839369 coreos-metadata[921]: Jan 29 16:07:06.839 INFO Fetch successful Jan 29 16:07:06.839369 coreos-metadata[921]: Jan 29 16:07:06.839 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 29 16:07:06.859313 coreos-metadata[921]: Jan 29 16:07:06.859 INFO Fetch successful Jan 29 16:07:06.873124 coreos-metadata[921]: Jan 29 16:07:06.873 INFO wrote hostname ci-4230.0.0-a-877fd59aac to /sysroot/etc/hostname Jan 29 16:07:06.882765 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:07:07.059421 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:07:07.091881 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:07:07.106689 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:07:07.116116 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:07:07.743975 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:07:07.761157 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:07:07.769239 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:07:07.795208 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:07:07.794476 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:07:07.811280 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 29 16:07:07.825472 ignition[1044]: INFO : Ignition 2.20.0 Jan 29 16:07:07.825472 ignition[1044]: INFO : Stage: mount Jan 29 16:07:07.834829 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:07:07.834829 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 16:07:07.834829 ignition[1044]: INFO : mount: mount passed Jan 29 16:07:07.834829 ignition[1044]: INFO : Ignition finished successfully Jan 29 16:07:07.830973 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:07:07.858210 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:07:07.878246 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:07:07.918469 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1052) Jan 29 16:07:07.918531 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:07:07.924597 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:07:07.928889 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:07:07.936067 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:07:07.937568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:07:07.963501 ignition[1069]: INFO : Ignition 2.20.0 Jan 29 16:07:07.969169 ignition[1069]: INFO : Stage: files Jan 29 16:07:07.969169 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:07:07.969169 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 16:07:07.969169 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:07:07.992741 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:07:07.992741 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:07:08.033942 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:07:08.041549 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:07:08.041549 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:07:08.034338 unknown[1069]: wrote ssh authorized keys file for user: core Jan 29 16:07:08.061593 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 16:07:08.061593 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 16:07:08.107701 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:07:08.265553 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 16:07:08.265553 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:07:08.286409 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 29 16:07:08.723640 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:07:08.800442 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 16:07:08.811283 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 29 16:07:09.223897 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:07:09.444336 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 16:07:09.444336 ignition[1069]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:07:09.464279 ignition[1069]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:07:09.464279 ignition[1069]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:07:09.464279 ignition[1069]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:07:09.464279 ignition[1069]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:07:09.464279 ignition[1069]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:07:09.464279 ignition[1069]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" 
Jan 29 16:07:09.464279 ignition[1069]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:07:09.464279 ignition[1069]: INFO : files: files passed Jan 29 16:07:09.464279 ignition[1069]: INFO : Ignition finished successfully Jan 29 16:07:09.463613 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:07:09.507296 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:07:09.523227 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:07:09.547758 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:07:09.613090 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:07:09.613090 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:07:09.547853 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:07:09.638455 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:07:09.558383 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:07:09.568513 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:07:09.591291 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:07:09.640568 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:07:09.640684 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:07:09.654392 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:07:09.667326 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 29 16:07:09.679346 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:07:09.692533 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:07:09.732214 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:07:09.757257 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:07:09.778610 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:07:09.778716 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:07:09.791492 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:07:09.804580 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:07:09.817734 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:07:09.829749 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:07:09.829821 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:07:09.846505 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:07:09.858733 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:07:09.868917 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:07:09.879554 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:07:09.892024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:07:09.904859 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:07:09.916392 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:07:09.928059 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:07:09.940487 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 29 16:07:09.950957 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:07:09.960850 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:07:09.960921 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:07:09.976023 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:07:09.982437 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:07:09.995008 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:07:10.000440 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:07:10.007402 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:07:10.007470 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:07:10.025416 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:07:10.025472 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:07:10.040131 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:07:10.040178 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:07:10.051123 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 16:07:10.051175 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:07:10.118361 ignition[1123]: INFO : Ignition 2.20.0 Jan 29 16:07:10.118361 ignition[1123]: INFO : Stage: umount Jan 29 16:07:10.118361 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:07:10.118361 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 16:07:10.118361 ignition[1123]: INFO : umount: umount passed Jan 29 16:07:10.118361 ignition[1123]: INFO : Ignition finished successfully Jan 29 16:07:10.085216 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 29 16:07:10.103336 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:07:10.103423 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:07:10.121247 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:07:10.126390 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:07:10.126455 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:07:10.133249 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:07:10.133301 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:07:10.156347 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:07:10.156454 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:07:10.169018 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:07:10.169091 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:07:10.183885 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:07:10.183945 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:07:10.202133 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:07:10.202193 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:07:10.213633 systemd[1]: Stopped target network.target - Network. Jan 29 16:07:10.226111 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:07:10.226186 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:07:10.238501 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:07:10.249009 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:07:10.249134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:07:10.261783 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 29 16:07:10.272071 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:07:10.284211 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:07:10.284348 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:07:10.294246 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:07:10.294282 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:07:10.306323 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:07:10.306377 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:07:10.318548 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:07:10.318599 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:07:10.329417 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:07:10.339774 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:07:10.352557 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:07:10.358231 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:07:10.567177 kernel: hv_netvsc 0022487a-a28f-0022-487a-a28f0022487a eth0: Data path switched from VF: enP34142s1
Jan 29 16:07:10.358340 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:07:10.374452 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:07:10.374676 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:07:10.374786 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:07:10.393898 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:07:10.394714 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:07:10.394766 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:07:10.418272 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:07:10.423727 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:07:10.423801 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:07:10.430777 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:07:10.430825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:07:10.446007 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:07:10.446069 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:07:10.452156 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:07:10.452198 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:07:10.472712 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:07:10.483733 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:07:10.483805 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:07:10.506291 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:07:10.506694 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:07:10.518992 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:07:10.519176 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:07:10.530340 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:07:10.530383 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:07:10.550345 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:07:10.550405 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:07:10.560916 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:07:10.560983 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:07:10.579164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:07:10.579223 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:07:10.623259 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:07:10.637758 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:07:10.637830 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:07:10.852454 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:07:10.657192 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:07:10.659238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:10.676364 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:07:10.676433 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:07:10.676796 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:07:10.676904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:07:10.686519 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:07:10.686616 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:07:10.697490 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:07:10.697584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:07:10.709941 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:07:10.721512 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:07:10.721604 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:07:10.752270 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:07:10.772787 systemd[1]: Switching root.
Jan 29 16:07:10.937351 systemd-journald[218]: Journal stopped
Jan 29 16:07:14.461587 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:07:14.461609 kernel: SELinux: policy capability open_perms=1
Jan 29 16:07:14.461618 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:07:14.461626 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:07:14.461635 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:07:14.461643 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:07:14.461651 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:07:14.461661 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:07:14.461669 kernel: audit: type=1403 audit(1738166831.502:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:07:14.461678 systemd[1]: Successfully loaded SELinux policy in 120.895ms.
Jan 29 16:07:14.461690 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.300ms.
Jan 29 16:07:14.461699 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:07:14.461708 systemd[1]: Detected virtualization microsoft.
Jan 29 16:07:14.461716 systemd[1]: Detected architecture arm64.
Jan 29 16:07:14.461725 systemd[1]: Detected first boot.
Jan 29 16:07:14.461735 systemd[1]: Hostname set to .
Jan 29 16:07:14.461744 systemd[1]: Initializing machine ID from random generator.
Jan 29 16:07:14.461752 zram_generator::config[1167]: No configuration found.
Jan 29 16:07:14.461761 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:07:14.461770 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:07:14.461779 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:07:14.461787 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:07:14.461797 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:07:14.461806 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:07:14.461815 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:07:14.461824 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:07:14.461833 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:07:14.461842 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:07:14.461851 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:07:14.461862 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:07:14.461871 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:07:14.461880 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:07:14.461889 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:07:14.461898 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:07:14.461907 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:07:14.461915 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:07:14.461924 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:07:14.461935 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:07:14.461944 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 16:07:14.461953 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:07:14.461964 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:07:14.461973 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:07:14.461983 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:07:14.461991 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:07:14.462000 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:07:14.462011 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:07:14.462020 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:07:14.462040 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:07:14.462051 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:07:14.462062 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:07:14.462071 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:07:14.462082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:07:14.462092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:07:14.462101 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:07:14.462110 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:07:14.462119 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:07:14.462128 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:07:14.462137 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:07:14.462148 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:07:14.462157 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:07:14.462166 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:07:14.462176 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:07:14.462185 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:07:14.462194 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:07:14.462203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:07:14.462212 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:07:14.462223 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:07:14.462232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:07:14.462241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:07:14.462250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:07:14.462261 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:07:14.462270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:07:14.462279 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:07:14.462289 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:07:14.462300 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:07:14.462309 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:07:14.462318 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:07:14.462327 kernel: loop: module loaded
Jan 29 16:07:14.462336 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:07:14.462345 kernel: fuse: init (API version 7.39)
Jan 29 16:07:14.462354 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:07:14.462362 kernel: ACPI: bus type drm_connector registered
Jan 29 16:07:14.462371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:07:14.462397 systemd-journald[1271]: Collecting audit messages is disabled.
Jan 29 16:07:14.462417 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:07:14.462427 systemd-journald[1271]: Journal started
Jan 29 16:07:14.462449 systemd-journald[1271]: Runtime Journal (/run/log/journal/32fcde4e7004494cbb48055b208d4a72) is 8M, max 78.5M, 70.5M free.
Jan 29 16:07:13.633506 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:07:13.641891 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 16:07:13.642290 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:07:13.642599 systemd[1]: systemd-journald.service: Consumed 3.339s CPU time.
Jan 29 16:07:14.491619 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:07:14.506630 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:07:14.521780 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:07:14.530708 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:07:14.530757 systemd[1]: Stopped verity-setup.service.
Jan 29 16:07:14.547835 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:07:14.548709 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:07:14.554513 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:07:14.560579 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:07:14.565737 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:07:14.571711 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:07:14.577768 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:07:14.583176 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:07:14.589677 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:07:14.596806 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:07:14.596964 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:07:14.603414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:07:14.603564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:07:14.610416 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:07:14.610570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:07:14.616510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:07:14.616653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:07:14.624497 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:07:14.624640 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:07:14.630581 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:07:14.630725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:07:14.636758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:07:14.643086 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:07:14.650110 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:07:14.657849 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:07:14.665043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:07:14.679816 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:07:14.691098 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:07:14.697810 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:07:14.703792 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:07:14.703827 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:07:14.710008 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:07:14.717625 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:07:14.724627 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:07:14.730055 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:07:14.733224 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:07:14.741885 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:07:14.748330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:07:14.749347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:07:14.755445 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:07:14.756375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:07:14.764244 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:07:14.774231 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:07:14.781236 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:07:14.791563 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:07:14.800142 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:07:14.808699 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:07:14.820170 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:07:14.820374 systemd-journald[1271]: Time spent on flushing to /var/log/journal/32fcde4e7004494cbb48055b208d4a72 is 43.059ms for 917 entries.
Jan 29 16:07:14.820374 systemd-journald[1271]: System Journal (/var/log/journal/32fcde4e7004494cbb48055b208d4a72) is 11.8M, max 2.6G, 2.6G free.
Jan 29 16:07:14.925267 systemd-journald[1271]: Received client request to flush runtime journal.
Jan 29 16:07:14.925312 kernel: loop0: detected capacity change from 0 to 113512
Jan 29 16:07:14.925341 systemd-journald[1271]: /var/log/journal/32fcde4e7004494cbb48055b208d4a72/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 29 16:07:14.925365 systemd-journald[1271]: Rotating system journal.
Jan 29 16:07:14.833389 udevadm[1310]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 16:07:14.834490 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:07:14.852258 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:07:14.882068 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:07:14.896074 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:07:14.909149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:07:14.928069 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:07:14.960933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:07:14.962093 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:07:14.974241 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Jan 29 16:07:14.974257 systemd-tmpfiles[1322]: ACLs are not supported, ignoring.
Jan 29 16:07:14.979073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:07:15.163051 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:07:15.186058 kernel: loop1: detected capacity change from 0 to 189592
Jan 29 16:07:15.237055 kernel: loop2: detected capacity change from 0 to 28720
Jan 29 16:07:15.721063 kernel: loop3: detected capacity change from 0 to 123192
Jan 29 16:07:15.847148 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:07:15.868233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:07:15.891488 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Jan 29 16:07:15.982082 kernel: loop4: detected capacity change from 0 to 113512
Jan 29 16:07:15.992049 kernel: loop5: detected capacity change from 0 to 189592
Jan 29 16:07:16.005066 kernel: loop6: detected capacity change from 0 to 28720
Jan 29 16:07:16.015062 kernel: loop7: detected capacity change from 0 to 123192
Jan 29 16:07:16.020586 (sd-merge)[1335]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 29 16:07:16.021005 (sd-merge)[1335]: Merged extensions into '/usr'.
Jan 29 16:07:16.024919 systemd[1]: Reload requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:07:16.025085 systemd[1]: Reloading...
Jan 29 16:07:16.099066 zram_generator::config[1363]: No configuration found.
Jan 29 16:07:16.282155 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:07:16.313070 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:07:16.337057 kernel: hv_vmbus: registering driver hv_balloon
Jan 29 16:07:16.348335 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 29 16:07:16.348419 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 29 16:07:16.370062 kernel: hv_vmbus: registering driver hyperv_fb
Jan 29 16:07:16.385661 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 29 16:07:16.385747 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 29 16:07:16.382765 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 16:07:16.382956 systemd[1]: Reloading finished in 357 ms.
Jan 29 16:07:16.392537 kernel: Console: switching to colour dummy device 80x25
Jan 29 16:07:16.396095 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 16:07:16.408225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:07:16.415903 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:07:16.441053 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1395)
Jan 29 16:07:16.454375 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:07:16.464127 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:07:16.490853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:07:16.504155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:16.552632 systemd[1]: Reload requested from client PID 1484 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:07:16.552647 systemd[1]: Reloading...
Jan 29 16:07:16.583471 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:07:16.583692 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:07:16.584332 systemd-tmpfiles[1494]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:07:16.584543 systemd-tmpfiles[1494]: ACLs are not supported, ignoring.
Jan 29 16:07:16.584594 systemd-tmpfiles[1494]: ACLs are not supported, ignoring.
Jan 29 16:07:16.612997 systemd-tmpfiles[1494]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:07:16.613011 systemd-tmpfiles[1494]: Skipping /boot
Jan 29 16:07:16.630660 systemd-tmpfiles[1494]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:07:16.630674 systemd-tmpfiles[1494]: Skipping /boot
Jan 29 16:07:16.635234 zram_generator::config[1551]: No configuration found.
Jan 29 16:07:16.740636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:07:16.837375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 16:07:16.844465 systemd[1]: Reloading finished in 291 ms.
Jan 29 16:07:16.868143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:07:16.892235 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:07:16.907265 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:07:16.913667 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:07:16.919883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:07:16.921316 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:07:16.930368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:07:16.938371 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:07:16.951490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:07:16.957736 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:07:16.959412 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:07:16.966935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:07:16.970253 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:07:16.981103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:07:16.993977 lvm[1615]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:07:16.996719 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:07:17.009521 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:07:17.016266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:07:17.016431 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:17.025808 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:17.034645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:07:17.051400 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:07:17.059814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:07:17.060735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:07:17.072537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:07:17.072717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:07:17.082558 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:07:17.082760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:07:17.093614 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:07:17.105287 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:07:17.113365 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:07:17.115821 augenrules[1655]: No rules
Jan 29 16:07:17.124395 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:07:17.125704 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:07:17.133076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:07:17.152536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:07:17.159730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:07:17.168251 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:07:17.175632 lvm[1669]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:07:17.182378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:07:17.200380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:07:17.211275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:07:17.217442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:07:17.217743 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:07:17.223075 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:07:17.231946 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:07:17.239566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:07:17.239846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:07:17.247442 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:07:17.247858 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:07:17.255585 systemd-resolved[1627]: Positive Trust Anchors:
Jan 29 16:07:17.255606 systemd-resolved[1627]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:07:17.255637 systemd-resolved[1627]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:07:17.256019 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:07:17.256373 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:07:17.270384 systemd-networkd[1485]: lo: Link UP
Jan 29 16:07:17.270400 systemd-networkd[1485]: lo: Gained carrier
Jan 29 16:07:17.272834 systemd-networkd[1485]: Enumeration completed
Jan 29 16:07:17.273184 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:17.273194 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:07:17.275395 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:07:17.285785 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:07:17.294010 systemd-resolved[1627]: Using system hostname 'ci-4230.0.0-a-877fd59aac'.
Jan 29 16:07:17.299194 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:07:17.307614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:07:17.309304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:07:17.317564 augenrules[1680]: /sbin/augenrules: No change
Jan 29 16:07:17.323642 augenrules[1698]: No rules
Jan 29 16:07:17.327183 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:07:17.335200 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:07:17.343225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:07:17.349372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:07:17.349607 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:07:17.353260 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:07:17.356992 kernel: mlx5_core 855e:00:02.0 enP34142s1: Link up
Jan 29 16:07:17.364209 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:07:17.371170 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:07:17.377665 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:07:17.378606 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:07:17.386675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:07:17.386896 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:07:17.400536 kernel: hv_netvsc 0022487a-a28f-0022-487a-a28f0022487a eth0: Data path switched to VF: enP34142s1
Jan 29 16:07:17.399998 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:07:17.400229 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:07:17.403393 systemd-networkd[1485]: enP34142s1: Link UP
Jan 29 16:07:17.403481 systemd-networkd[1485]: eth0: Link UP
Jan 29 16:07:17.403484 systemd-networkd[1485]: eth0: Gained carrier
Jan 29 16:07:17.403498 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:17.406297 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:07:17.413460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:07:17.414204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:07:17.416018 systemd-networkd[1485]: enP34142s1: Gained carrier
Jan 29 16:07:17.422649 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:07:17.422856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:07:17.431288 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 29 16:07:17.432855 systemd[1]: Reached target network.target - Network.
Jan 29 16:07:17.438633 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:07:17.446278 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:07:17.446353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:07:17.475236 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:07:17.523148 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:07:17.530268 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:07:19.122638 ldconfig[1302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:07:19.132599 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:07:19.143253 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:07:19.157186 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:07:19.163555 systemd-networkd[1485]: enP34142s1: Gained IPv6LL
Jan 29 16:07:19.164608 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:07:19.170512 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:07:19.177693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:07:19.184698 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:07:19.190903 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:07:19.197891 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:07:19.204961 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:07:19.205003 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:07:19.210233 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:07:19.216532 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:07:19.224585 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:07:19.231832 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:07:19.238956 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:07:19.245752 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:07:19.261744 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:07:19.268427 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:07:19.275373 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:07:19.281128 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:07:19.286338 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:07:19.291322 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:07:19.291351 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:07:19.306135 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 29 16:07:19.313927 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:07:19.328256 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 16:07:19.335222 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:07:19.343468 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:07:19.350342 (chronyd)[1721]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 29 16:07:19.352349 systemd-networkd[1485]: eth0: Gained IPv6LL
Jan 29 16:07:19.354216 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:07:19.359734 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:07:19.359780 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 29 16:07:19.360857 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 29 16:07:19.366760 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 29 16:07:19.369761 jq[1728]: false
Jan 29 16:07:19.370221 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:07:19.379624 KVP[1730]: KVP starting; pid is:1730
Jan 29 16:07:19.383212 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 16:07:19.391234 KVP[1730]: KVP LIC Version: 3.1
Jan 29 16:07:19.392239 kernel: hv_utils: KVP IC version 4.0
Jan 29 16:07:19.393740 chronyd[1736]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 29 16:07:19.395249 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:07:19.408167 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:07:19.418341 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:07:19.419708 extend-filesystems[1729]: Found loop4
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found loop5
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found loop6
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found loop7
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found sda
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found sda1
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found sda2
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found sda3
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found usr
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found sda4
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found sda6
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found sda7
Jan 29 16:07:19.435310 extend-filesystems[1729]: Found sda9
Jan 29 16:07:19.435310 extend-filesystems[1729]: Checking size of /dev/sda9
Jan 29 16:07:19.427837 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:07:19.426218 chronyd[1736]: Timezone right/UTC failed leap second check, ignoring
Jan 29 16:07:19.609308 coreos-metadata[1723]: Jan 29 16:07:19.562 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 16:07:19.609308 coreos-metadata[1723]: Jan 29 16:07:19.570 INFO Fetch successful
Jan 29 16:07:19.609308 coreos-metadata[1723]: Jan 29 16:07:19.570 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 29 16:07:19.609308 coreos-metadata[1723]: Jan 29 16:07:19.570 INFO Fetch successful
Jan 29 16:07:19.609308 coreos-metadata[1723]: Jan 29 16:07:19.570 INFO Fetching http://168.63.129.16/machine/c352516e-e7a7-4691-91cc-7f2f4ac00caf/b23611fb%2D6323%2D4466%2D88c3%2D438e617c1870.%5Fci%2D4230.0.0%2Da%2D877fd59aac?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 29 16:07:19.609308 coreos-metadata[1723]: Jan 29 16:07:19.572 INFO Fetch successful
Jan 29 16:07:19.609308 coreos-metadata[1723]: Jan 29 16:07:19.572 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 29 16:07:19.609308 coreos-metadata[1723]: Jan 29 16:07:19.588 INFO Fetch successful
Jan 29 16:07:19.609667 extend-filesystems[1729]: Old size kept for /dev/sda9
Jan 29 16:07:19.609667 extend-filesystems[1729]: Found sr0
Jan 29 16:07:19.428990 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:07:19.426356 chronyd[1736]: Loaded seccomp filter (level 2)
Jan 29 16:07:19.441154 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:07:19.443451 dbus-daemon[1727]: [system] SELinux support is enabled
Jan 29 16:07:19.672207 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1765)
Jan 29 16:07:19.672233 update_engine[1749]: I20250129 16:07:19.530218 1749 main.cc:92] Flatcar Update Engine starting
Jan 29 16:07:19.672233 update_engine[1749]: I20250129 16:07:19.532553 1749 update_check_scheduler.cc:74] Next update check in 11m57s
Jan 29 16:07:19.458159 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:07:19.639198 dbus-daemon[1727]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 16:07:19.672572 jq[1751]: true
Jan 29 16:07:19.474138 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:07:19.491346 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 16:07:19.507402 systemd[1]: Started chronyd.service - NTP client/server.
Jan 29 16:07:19.525129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:07:19.526070 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:07:19.679158 jq[1776]: true
Jan 29 16:07:19.526324 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:07:19.526489 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:07:19.541274 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:07:19.541458 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 16:07:19.561403 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:07:19.561604 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:07:19.562183 systemd-logind[1742]: New seat seat0.
Jan 29 16:07:19.573661 systemd-logind[1742]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 29 16:07:19.588202 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:07:19.623137 (ntainerd)[1777]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:07:19.638428 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 16:07:19.673144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:07:19.690408 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 16:07:19.697246 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:07:19.697367 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:07:19.708373 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:07:19.708393 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:07:19.737888 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 16:07:19.763117 tar[1757]: linux-arm64/helm
Jan 29 16:07:19.767881 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:07:19.780772 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 16:07:19.784016 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:07:19.794232 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 16:07:19.924625 bash[1861]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:07:19.928099 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 16:07:19.938432 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 16:07:20.002586 sshd_keygen[1743]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:07:20.040393 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:07:20.046266 locksmithd[1857]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 16:07:20.058331 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:07:20.070232 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 29 16:07:20.080196 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:07:20.080884 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:07:20.098385 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:07:20.121232 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:07:20.139322 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:07:20.155390 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 16:07:20.164853 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:07:20.177246 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 29 16:07:20.267154 containerd[1777]: time="2025-01-29T16:07:20.266723120Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:07:20.307918 containerd[1777]: time="2025-01-29T16:07:20.307717760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309121320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309148600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309164200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309300040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309316680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309376160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309387360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309581200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309594600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309606880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310240 containerd[1777]: time="2025-01-29T16:07:20.309615240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310462 containerd[1777]: time="2025-01-29T16:07:20.309683960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310462 containerd[1777]: time="2025-01-29T16:07:20.309881120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310462 containerd[1777]: time="2025-01-29T16:07:20.310006240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:07:20.310462 containerd[1777]: time="2025-01-29T16:07:20.310019040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:07:20.310462 containerd[1777]: time="2025-01-29T16:07:20.310100240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:07:20.310462 containerd[1777]: time="2025-01-29T16:07:20.310144840Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:07:20.327299 containerd[1777]: time="2025-01-29T16:07:20.327266200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:07:20.327444 containerd[1777]: time="2025-01-29T16:07:20.327430360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:07:20.327567 containerd[1777]: time="2025-01-29T16:07:20.327553800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:07:20.327657 containerd[1777]: time="2025-01-29T16:07:20.327644400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:07:20.327765 containerd[1777]: time="2025-01-29T16:07:20.327747400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:07:20.327967 containerd[1777]: time="2025-01-29T16:07:20.327950600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328247160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328352760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328367800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328381680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328395880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328408880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328420880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328434200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328453080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328465720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328477120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328489720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328509480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330683 containerd[1777]: time="2025-01-29T16:07:20.328522880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328534200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328546720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328557880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328570560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328581880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328594120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328606360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328621040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328631920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328645600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328657080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328671200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328690360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328702840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.330973 containerd[1777]: time="2025-01-29T16:07:20.328713440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 16:07:20.331236 containerd[1777]: time="2025-01-29T16:07:20.328761440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 16:07:20.331236 containerd[1777]: time="2025-01-29T16:07:20.328779000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 16:07:20.331236 containerd[1777]: time="2025-01-29T16:07:20.328789280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 16:07:20.331236 containerd[1777]: time="2025-01-29T16:07:20.328801040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 16:07:20.331236 containerd[1777]: time="2025-01-29T16:07:20.328809840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.331236 containerd[1777]: time="2025-01-29T16:07:20.328822280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 16:07:20.331236 containerd[1777]: time="2025-01-29T16:07:20.328831720Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 16:07:20.331236 containerd[1777]: time="2025-01-29T16:07:20.328841120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 16:07:20.331369 containerd[1777]: time="2025-01-29T16:07:20.329120040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 16:07:20.331369 containerd[1777]: time="2025-01-29T16:07:20.329167400Z" level=info msg="Connect containerd service"
Jan 29 16:07:20.331369 containerd[1777]: time="2025-01-29T16:07:20.329200400Z" level=info msg="using legacy CRI server"
Jan 29 16:07:20.331369 containerd[1777]: time="2025-01-29T16:07:20.329206640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 16:07:20.331369 containerd[1777]: time="2025-01-29T16:07:20.329320640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 16:07:20.331369 containerd[1777]: time="2025-01-29T16:07:20.329820040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:07:20.333298 containerd[1777]: time="2025-01-29T16:07:20.333262000Z" level=info msg="Start subscribing containerd event"
Jan 29 16:07:20.333656 containerd[1777]: time="2025-01-29T16:07:20.333555360Z" level=info msg="Start recovering state"
Jan 29 16:07:20.333994 containerd[1777]: time="2025-01-29T16:07:20.333978480Z" level=info msg="Start event monitor"
Jan 29 16:07:20.334078 containerd[1777]: time="2025-01-29T16:07:20.334065320Z" level=info msg="Start snapshots syncer"
Jan 29 16:07:20.334192 containerd[1777]: time="2025-01-29T16:07:20.334178640Z" level=info msg="Start cni network conf syncer for default"
Jan 29 16:07:20.334691 containerd[1777]: time="2025-01-29T16:07:20.334549560Z" level=info msg="Start streaming server"
Jan 29 16:07:20.335119 containerd[1777]: time="2025-01-29T16:07:20.335102200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 16:07:20.336304 containerd[1777]: time="2025-01-29T16:07:20.335815160Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 16:07:20.336433 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 16:07:20.344323 containerd[1777]: time="2025-01-29T16:07:20.343538200Z" level=info msg="containerd successfully booted in 0.081147s"
Jan 29 16:07:20.398583 tar[1757]: linux-arm64/LICENSE
Jan 29 16:07:20.398703 tar[1757]: linux-arm64/README.md
Jan 29 16:07:20.409915 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 16:07:20.564923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:07:20.572198 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:07:20.574283 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 16:07:20.580662 systemd[1]: Startup finished in 663ms (kernel) + 11.046s (initrd) + 9.197s (userspace) = 20.907s.
Jan 29 16:07:20.845562 login[1893]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:07:20.850703 login[1894]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:07:20.854974 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:07:20.860316 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:07:20.870585 systemd-logind[1742]: New session 2 of user core.
Jan 29 16:07:20.874507 systemd-logind[1742]: New session 1 of user core.
Jan 29 16:07:20.882428 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:07:20.890371 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:07:20.904026 (systemd)[1921]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:07:20.908525 systemd-logind[1742]: New session c1 of user core.
Jan 29 16:07:20.968007 kubelet[1909]: E0129 16:07:20.967961 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:07:20.971317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:07:20.971448 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:07:20.972368 systemd[1]: kubelet.service: Consumed 667ms CPU time, 233M memory peak.
Jan 29 16:07:21.097662 systemd[1921]: Queued start job for default target default.target.
Jan 29 16:07:21.109861 systemd[1921]: Created slice app.slice - User Application Slice.
Jan 29 16:07:21.109886 systemd[1921]: Reached target paths.target - Paths.
Jan 29 16:07:21.109920 systemd[1921]: Reached target timers.target - Timers.
Jan 29 16:07:21.111126 systemd[1921]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:07:21.121887 systemd[1921]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:07:21.121951 systemd[1921]: Reached target sockets.target - Sockets.
Jan 29 16:07:21.121990 systemd[1921]: Reached target basic.target - Basic System.
Jan 29 16:07:21.122018 systemd[1921]: Reached target default.target - Main User Target.
Jan 29 16:07:21.122069 systemd[1921]: Startup finished in 205ms.
Jan 29 16:07:21.122522 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:07:21.131263 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:07:21.132089 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:07:21.526784 waagent[1895]: 2025-01-29T16:07:21.526647Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 29 16:07:21.532447 waagent[1895]: 2025-01-29T16:07:21.532391Z INFO Daemon Daemon OS: flatcar 4230.0.0
Jan 29 16:07:21.536672 waagent[1895]: 2025-01-29T16:07:21.536629Z INFO Daemon Daemon Python: 3.11.11
Jan 29 16:07:21.540789 waagent[1895]: 2025-01-29T16:07:21.540742Z INFO Daemon Daemon Run daemon
Jan 29 16:07:21.544515 waagent[1895]: 2025-01-29T16:07:21.544475Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.0.0'
Jan 29 16:07:21.553122 waagent[1895]: 2025-01-29T16:07:21.553063Z INFO Daemon Daemon Using waagent for provisioning
Jan 29 16:07:21.558230 waagent[1895]: 2025-01-29T16:07:21.558185Z INFO Daemon Daemon Activate resource disk
Jan 29 16:07:21.562628 waagent[1895]: 2025-01-29T16:07:21.562586Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 29 16:07:21.574732 waagent[1895]: 2025-01-29T16:07:21.574682Z INFO Daemon Daemon Found device: None
Jan 29 16:07:21.579706 waagent[1895]: 2025-01-29T16:07:21.578840Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 29 16:07:21.587092 waagent[1895]: 2025-01-29T16:07:21.587048Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 29 16:07:21.598266 waagent[1895]: 2025-01-29T16:07:21.598211Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 29 16:07:21.603624 waagent[1895]: 2025-01-29T16:07:21.603582Z INFO Daemon Daemon Running default provisioning handler
Jan 29 16:07:21.614698 waagent[1895]: 2025-01-29T16:07:21.614084Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 29 16:07:21.626941 waagent[1895]: 2025-01-29T16:07:21.626888Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 29 16:07:21.636181 waagent[1895]: 2025-01-29T16:07:21.636138Z INFO Daemon Daemon cloud-init is enabled: False
Jan 29 16:07:21.640832 waagent[1895]: 2025-01-29T16:07:21.640793Z INFO Daemon Daemon Copying ovf-env.xml
Jan 29 16:07:21.723054 waagent[1895]: 2025-01-29T16:07:21.719823Z INFO Daemon Daemon Successfully mounted dvd
Jan 29 16:07:21.750903 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 29 16:07:21.753775 waagent[1895]: 2025-01-29T16:07:21.753698Z INFO Daemon Daemon Detect protocol endpoint
Jan 29 16:07:21.758341 waagent[1895]: 2025-01-29T16:07:21.758295Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 29 16:07:21.763688 waagent[1895]: 2025-01-29T16:07:21.763645Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 29 16:07:21.770032 waagent[1895]: 2025-01-29T16:07:21.769988Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 29 16:07:21.775107 waagent[1895]: 2025-01-29T16:07:21.775067Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 29 16:07:21.779892 waagent[1895]: 2025-01-29T16:07:21.779822Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 29 16:07:21.825753 waagent[1895]: 2025-01-29T16:07:21.825711Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 29 16:07:21.832333 waagent[1895]: 2025-01-29T16:07:21.832306Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 29 16:07:21.837431 waagent[1895]: 2025-01-29T16:07:21.837386Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 29 16:07:21.986020 waagent[1895]: 2025-01-29T16:07:21.985919Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 29 16:07:21.992618 waagent[1895]: 2025-01-29T16:07:21.992559Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 29 16:07:22.001947 waagent[1895]: 2025-01-29T16:07:22.001902Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 29 16:07:22.022264 waagent[1895]: 2025-01-29T16:07:22.022223Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Jan 29 16:07:22.027841 waagent[1895]: 2025-01-29T16:07:22.027799Z INFO Daemon
Jan 29 16:07:22.030498 waagent[1895]: 2025-01-29T16:07:22.030433Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 51f9a576-b0c6-4fe2-a000-5441cd64949a eTag: 18354664136586275781 source: Fabric]
Jan 29 16:07:22.041712 waagent[1895]: 2025-01-29T16:07:22.041672Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 29 16:07:22.048121 waagent[1895]: 2025-01-29T16:07:22.048080Z INFO Daemon
Jan 29 16:07:22.050747 waagent[1895]: 2025-01-29T16:07:22.050707Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 29 16:07:22.063602 waagent[1895]: 2025-01-29T16:07:22.063571Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 29 16:07:22.150629 waagent[1895]: 2025-01-29T16:07:22.150545Z INFO Daemon Downloaded certificate {'thumbprint': '40C137E0AE397FF4C318F7F20C5BB3906F86D8F2', 'hasPrivateKey': True}
Jan 29 16:07:22.159951 waagent[1895]: 2025-01-29T16:07:22.159908Z INFO Daemon Downloaded certificate {'thumbprint': '0D15501D7EFC064DBE87A159D823B253523E8AC7', 'hasPrivateKey': False}
Jan 29 16:07:22.169646 waagent[1895]: 2025-01-29T16:07:22.169599Z INFO Daemon Fetch goal state completed
Jan 29 16:07:22.181413 waagent[1895]: 2025-01-29T16:07:22.181365Z INFO Daemon Daemon Starting provisioning
Jan 29 16:07:22.186201 waagent[1895]: 2025-01-29T16:07:22.186156Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 29 16:07:22.190619 waagent[1895]: 2025-01-29T16:07:22.190581Z INFO Daemon Daemon Set hostname [ci-4230.0.0-a-877fd59aac]
Jan 29 16:07:22.209024 waagent[1895]: 2025-01-29T16:07:22.208954Z INFO Daemon Daemon Publish hostname [ci-4230.0.0-a-877fd59aac]
Jan 29 16:07:22.215209 waagent[1895]: 2025-01-29T16:07:22.215159Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 29 16:07:22.221290 waagent[1895]: 2025-01-29T16:07:22.221245Z INFO Daemon Daemon Primary interface is [eth0]
Jan 29 16:07:22.232944 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:07:22.232950 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:07:22.233000 systemd-networkd[1485]: eth0: DHCP lease lost
Jan 29 16:07:22.234157 waagent[1895]: 2025-01-29T16:07:22.234017Z INFO Daemon Daemon Create user account if not exists
Jan 29 16:07:22.239375 waagent[1895]: 2025-01-29T16:07:22.239323Z INFO Daemon Daemon User core already exists, skip useradd
Jan 29 16:07:22.244803 waagent[1895]: 2025-01-29T16:07:22.244761Z INFO Daemon Daemon Configure sudoer
Jan 29 16:07:22.249260 waagent[1895]: 2025-01-29T16:07:22.249210Z INFO Daemon Daemon Configure sshd
Jan 29 16:07:22.253829 waagent[1895]: 2025-01-29T16:07:22.253516Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 29 16:07:22.266184 waagent[1895]: 2025-01-29T16:07:22.266130Z INFO Daemon Daemon Deploy ssh public key.
Jan 29 16:07:22.276122 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 29 16:07:23.355243 waagent[1895]: 2025-01-29T16:07:23.355196Z INFO Daemon Daemon Provisioning complete
Jan 29 16:07:23.374380 waagent[1895]: 2025-01-29T16:07:23.374336Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 29 16:07:23.380297 waagent[1895]: 2025-01-29T16:07:23.380254Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 29 16:07:23.389265 waagent[1895]: 2025-01-29T16:07:23.389224Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 29 16:07:23.517067 waagent[1978]: 2025-01-29T16:07:23.516736Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 29 16:07:23.517067 waagent[1978]: 2025-01-29T16:07:23.516872Z INFO ExtHandler ExtHandler OS: flatcar 4230.0.0
Jan 29 16:07:23.517067 waagent[1978]: 2025-01-29T16:07:23.516926Z INFO ExtHandler ExtHandler Python: 3.11.11
Jan 29 16:07:23.535061 waagent[1978]: 2025-01-29T16:07:23.534387Z INFO ExtHandler ExtHandler Distro: flatcar-4230.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 29 16:07:23.535061 waagent[1978]: 2025-01-29T16:07:23.534574Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 29 16:07:23.535061 waagent[1978]: 2025-01-29T16:07:23.534633Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 29 16:07:23.542376 waagent[1978]: 2025-01-29T16:07:23.542321Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 29 16:07:23.547696 waagent[1978]: 2025-01-29T16:07:23.547656Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Jan 29 16:07:23.548144 waagent[1978]: 2025-01-29T16:07:23.548100Z INFO ExtHandler
Jan 29 16:07:23.548219 waagent[1978]: 2025-01-29T16:07:23.548187Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f4b0aabd-5213-4ee6-995a-491e56ebcd19 eTag: 18354664136586275781 source: Fabric]
Jan 29 16:07:23.548504 waagent[1978]: 2025-01-29T16:07:23.548465Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 29 16:07:23.549039 waagent[1978]: 2025-01-29T16:07:23.548995Z INFO ExtHandler
Jan 29 16:07:23.549117 waagent[1978]: 2025-01-29T16:07:23.549086Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 29 16:07:23.552697 waagent[1978]: 2025-01-29T16:07:23.552665Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 29 16:07:23.627006 waagent[1978]: 2025-01-29T16:07:23.626873Z INFO ExtHandler Downloaded certificate {'thumbprint': '40C137E0AE397FF4C318F7F20C5BB3906F86D8F2', 'hasPrivateKey': True}
Jan 29 16:07:23.627387 waagent[1978]: 2025-01-29T16:07:23.627342Z INFO ExtHandler Downloaded certificate {'thumbprint': '0D15501D7EFC064DBE87A159D823B253523E8AC7', 'hasPrivateKey': False}
Jan 29 16:07:23.627780 waagent[1978]: 2025-01-29T16:07:23.627739Z INFO ExtHandler Fetch goal state completed
Jan 29 16:07:23.645372 waagent[1978]: 2025-01-29T16:07:23.645312Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1978
Jan 29 16:07:23.645517 waagent[1978]: 2025-01-29T16:07:23.645481Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 29 16:07:23.647117 waagent[1978]: 2025-01-29T16:07:23.647074Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.0.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 29 16:07:23.647494 waagent[1978]: 2025-01-29T16:07:23.647459Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 29 16:07:23.664009 waagent[1978]: 2025-01-29T16:07:23.663964Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 29 16:07:23.664215 waagent[1978]: 2025-01-29T16:07:23.664174Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 29 16:07:23.669431 waagent[1978]: 2025-01-29T16:07:23.669398Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 29 16:07:23.674993 systemd[1]: Reload requested from client PID 1993 ('systemctl') (unit waagent.service)...
Jan 29 16:07:23.675009 systemd[1]: Reloading...
Jan 29 16:07:23.746089 zram_generator::config[2035]: No configuration found.
Jan 29 16:07:23.842487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:07:23.943450 systemd[1]: Reloading finished in 268 ms.
Jan 29 16:07:23.960252 waagent[1978]: 2025-01-29T16:07:23.957933Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 29 16:07:23.963922 systemd[1]: Reload requested from client PID 2086 ('systemctl') (unit waagent.service)...
Jan 29 16:07:23.963934 systemd[1]: Reloading...
Jan 29 16:07:24.039342 zram_generator::config[2125]: No configuration found.
Jan 29 16:07:24.143221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:07:24.244264 systemd[1]: Reloading finished in 279 ms.
Jan 29 16:07:24.261074 waagent[1978]: 2025-01-29T16:07:24.260539Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 29 16:07:24.261074 waagent[1978]: 2025-01-29T16:07:24.260703Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 29 16:07:24.494382 waagent[1978]: 2025-01-29T16:07:24.494263Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 29 16:07:24.495067 waagent[1978]: 2025-01-29T16:07:24.495000Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 29 16:07:24.495895 waagent[1978]: 2025-01-29T16:07:24.495843Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 29 16:07:24.496004 waagent[1978]: 2025-01-29T16:07:24.495958Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 29 16:07:24.496321 waagent[1978]: 2025-01-29T16:07:24.496084Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 29 16:07:24.496513 waagent[1978]: 2025-01-29T16:07:24.496455Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 29 16:07:24.496803 waagent[1978]: 2025-01-29T16:07:24.496751Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 29 16:07:24.497196 waagent[1978]: 2025-01-29T16:07:24.497148Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 29 16:07:24.497275 waagent[1978]: 2025-01-29T16:07:24.497241Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 29 16:07:24.497415 waagent[1978]: 2025-01-29T16:07:24.497377Z INFO EnvHandler ExtHandler Configure routes
Jan 29 16:07:24.497477 waagent[1978]: 2025-01-29T16:07:24.497450Z INFO EnvHandler ExtHandler Gateway:None
Jan 29 16:07:24.497528 waagent[1978]: 2025-01-29T16:07:24.497502Z INFO EnvHandler ExtHandler Routes:None
Jan 29 16:07:24.497793 waagent[1978]: 2025-01-29T16:07:24.497736Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jan 29 16:07:24.497957 waagent[1978]: 2025-01-29T16:07:24.497860Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jan 29 16:07:24.498579 waagent[1978]: 2025-01-29T16:07:24.498518Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jan 29 16:07:24.498787 waagent[1978]: 2025-01-29T16:07:24.498710Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jan 29 16:07:24.499270 waagent[1978]: 2025-01-29T16:07:24.499198Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jan 29 16:07:24.499270 waagent[1978]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jan 29 16:07:24.499270 waagent[1978]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jan 29 16:07:24.499270 waagent[1978]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jan 29 16:07:24.499270 waagent[1978]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jan 29 16:07:24.499270 waagent[1978]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 29 16:07:24.499270 waagent[1978]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jan 29 16:07:24.501078 waagent[1978]: 2025-01-29T16:07:24.500110Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jan 29 16:07:24.508206 waagent[1978]: 2025-01-29T16:07:24.508168Z INFO ExtHandler ExtHandler
Jan 29 16:07:24.508368 waagent[1978]: 2025-01-29T16:07:24.508334Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 73e09d51-f872-478f-b30d-55b1076c7955 correlation 26a56d68-84b4-4b51-bd2d-767417213c80 created: 2025-01-29T16:06:20.958260Z]
Jan 29 16:07:24.508829 waagent[1978]: 2025-01-29T16:07:24.508785Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 29 16:07:24.509641 waagent[1978]: 2025-01-29T16:07:24.509601Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Jan 29 16:07:24.544566 waagent[1978]: 2025-01-29T16:07:24.544491Z INFO MonitorHandler ExtHandler Network interfaces:
Jan 29 16:07:24.544566 waagent[1978]: Executing ['ip', '-a', '-o', 'link']:
Jan 29 16:07:24.544566 waagent[1978]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jan 29 16:07:24.544566 waagent[1978]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:a2:8f brd ff:ff:ff:ff:ff:ff
Jan 29 16:07:24.544566 waagent[1978]: 3: enP34142s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:a2:8f brd ff:ff:ff:ff:ff:ff\ altname enP34142p0s2
Jan 29 16:07:24.544566 waagent[1978]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jan 29 16:07:24.544566 waagent[1978]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jan 29 16:07:24.544566 waagent[1978]: 2: eth0 inet 10.200.20.42/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jan 29 16:07:24.544566 waagent[1978]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jan 29 16:07:24.544566 waagent[1978]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jan 29 16:07:24.544566 waagent[1978]: 2: eth0 inet6 fe80::222:48ff:fe7a:a28f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 29 16:07:24.544566 waagent[1978]: 3: enP34142s1 inet6 fe80::222:48ff:fe7a:a28f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jan 29 16:07:24.557747 waagent[1978]: 2025-01-29T16:07:24.557604Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0A72D731-BBF7-41FE-99E3-DD4557C8F715;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jan 29 16:07:24.571064 waagent[1978]: 2025-01-29T16:07:24.570603Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jan 29 16:07:24.571064 waagent[1978]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:24.571064 waagent[1978]: pkts bytes target prot opt in out source destination
Jan 29 16:07:24.571064 waagent[1978]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:24.571064 waagent[1978]: pkts bytes target prot opt in out source destination
Jan 29 16:07:24.571064 waagent[1978]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:24.571064 waagent[1978]: pkts bytes target prot opt in out source destination
Jan 29 16:07:24.571064 waagent[1978]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 29 16:07:24.571064 waagent[1978]: 4 593 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 29 16:07:24.571064 waagent[1978]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 29 16:07:24.574074 waagent[1978]: 2025-01-29T16:07:24.573760Z INFO EnvHandler ExtHandler Current Firewall rules:
Jan 29 16:07:24.574074 waagent[1978]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:24.574074 waagent[1978]: pkts bytes target prot opt in out source destination
Jan 29 16:07:24.574074 waagent[1978]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:24.574074 waagent[1978]: pkts bytes target prot opt in out source destination
Jan 29 16:07:24.574074 waagent[1978]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jan 29 16:07:24.574074 waagent[1978]: pkts bytes target prot opt in out source destination
Jan 29 16:07:24.574074 waagent[1978]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jan 29 16:07:24.574074 waagent[1978]: 8 1008 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jan 29 16:07:24.574074 waagent[1978]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jan 29 16:07:24.574283 waagent[1978]: 2025-01-29T16:07:24.574021Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jan 29 16:07:31.222229 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:07:31.230256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:07:31.318655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:07:31.321619 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:07:31.357752 kubelet[2218]: E0129 16:07:31.357702 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:07:31.360601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:07:31.360760 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:07:31.361290 systemd[1]: kubelet.service: Consumed 107ms CPU time, 94.6M memory peak.
Jan 29 16:07:41.456839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 16:07:41.462208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:07:41.543478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:07:41.547423 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:07:41.580592 kubelet[2233]: E0129 16:07:41.580550 2233 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:07:41.582900 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:07:41.583060 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:07:41.583355 systemd[1]: kubelet.service: Consumed 110ms CPU time, 91.8M memory peak.
Jan 29 16:07:43.221908 chronyd[1736]: Selected source PHC0
Jan 29 16:07:51.706895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 16:07:51.714296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:07:51.795919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:07:51.799539 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:07:51.832840 kubelet[2248]: E0129 16:07:51.832788 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:07:51.834659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:07:51.834780 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:07:51.835390 systemd[1]: kubelet.service: Consumed 111ms CPU time, 94.1M memory peak.
Jan 29 16:07:57.916676 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:07:57.925474 systemd[1]: Started sshd@0-10.200.20.42:22-10.200.16.10:40396.service - OpenSSH per-connection server daemon (10.200.16.10:40396).
Jan 29 16:07:58.470615 sshd[2255]: Accepted publickey for core from 10.200.16.10 port 40396 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:07:58.471805 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:07:58.475987 systemd-logind[1742]: New session 3 of user core.
Jan 29 16:07:58.481161 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:07:58.865119 systemd[1]: Started sshd@1-10.200.20.42:22-10.200.16.10:40402.service - OpenSSH per-connection server daemon (10.200.16.10:40402).
Jan 29 16:07:59.311834 sshd[2260]: Accepted publickey for core from 10.200.16.10 port 40402 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:07:59.313119 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:07:59.318557 systemd-logind[1742]: New session 4 of user core.
Jan 29 16:07:59.323168 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:07:59.634949 sshd[2262]: Connection closed by 10.200.16.10 port 40402
Jan 29 16:07:59.635463 sshd-session[2260]: pam_unix(sshd:session): session closed for user core
Jan 29 16:07:59.638683 systemd[1]: sshd@1-10.200.20.42:22-10.200.16.10:40402.service: Deactivated successfully.
Jan 29 16:07:59.640355 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:07:59.641172 systemd-logind[1742]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:07:59.641977 systemd-logind[1742]: Removed session 4.
Jan 29 16:07:59.712269 systemd[1]: Started sshd@2-10.200.20.42:22-10.200.16.10:40406.service - OpenSSH per-connection server daemon (10.200.16.10:40406).
Jan 29 16:08:00.150165 sshd[2268]: Accepted publickey for core from 10.200.16.10 port 40406 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:00.151381 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:00.155321 systemd-logind[1742]: New session 5 of user core.
Jan 29 16:08:00.163159 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:08:00.499217 sshd[2270]: Connection closed by 10.200.16.10 port 40406
Jan 29 16:08:00.499138 sshd-session[2268]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:00.501847 systemd-logind[1742]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:08:00.502002 systemd[1]: sshd@2-10.200.20.42:22-10.200.16.10:40406.service: Deactivated successfully.
Jan 29 16:08:00.503455 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:08:00.506349 systemd-logind[1742]: Removed session 5.
Jan 29 16:08:00.587424 systemd[1]: Started sshd@3-10.200.20.42:22-10.200.16.10:40408.service - OpenSSH per-connection server daemon (10.200.16.10:40408).
Jan 29 16:08:01.021672 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 40408 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:01.022990 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:01.026860 systemd-logind[1742]: New session 6 of user core.
Jan 29 16:08:01.036182 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:08:01.337054 sshd[2278]: Connection closed by 10.200.16.10 port 40408
Jan 29 16:08:01.337701 sshd-session[2276]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:01.341104 systemd[1]: sshd@3-10.200.20.42:22-10.200.16.10:40408.service: Deactivated successfully.
Jan 29 16:08:01.342694 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:08:01.343427 systemd-logind[1742]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:08:01.344396 systemd-logind[1742]: Removed session 6.
Jan 29 16:08:01.419281 systemd[1]: Started sshd@4-10.200.20.42:22-10.200.16.10:40416.service - OpenSSH per-connection server daemon (10.200.16.10:40416).
Jan 29 16:08:01.875826 sshd[2284]: Accepted publickey for core from 10.200.16.10 port 40416 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:01.877109 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:01.878177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 16:08:01.885269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:01.890109 systemd-logind[1742]: New session 7 of user core.
Jan 29 16:08:01.892137 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:08:01.976110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:01.994313 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:02.030629 kubelet[2295]: E0129 16:08:02.030571 2295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:02.032892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:02.033049 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:02.033539 systemd[1]: kubelet.service: Consumed 120ms CPU time, 94.4M memory peak.
Jan 29 16:08:02.488327 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:08:02.488594 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:02.513749 sudo[2302]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:02.584786 sshd[2289]: Connection closed by 10.200.16.10 port 40416
Jan 29 16:08:02.585501 sshd-session[2284]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:02.588980 systemd[1]: sshd@4-10.200.20.42:22-10.200.16.10:40416.service: Deactivated successfully.
Jan 29 16:08:02.590664 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:08:02.591526 systemd-logind[1742]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:08:02.592567 systemd-logind[1742]: Removed session 7.
Jan 29 16:08:02.670152 systemd[1]: Started sshd@5-10.200.20.42:22-10.200.16.10:40422.service - OpenSSH per-connection server daemon (10.200.16.10:40422).
Jan 29 16:08:03.126096 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 40422 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:03.127371 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:03.132236 systemd-logind[1742]: New session 8 of user core.
Jan 29 16:08:03.139174 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 16:08:03.382115 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:08:03.382370 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:03.385755 sudo[2312]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:03.389739 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:08:03.389976 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:03.403727 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:08:03.423550 augenrules[2334]: No rules
Jan 29 16:08:03.424488 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:08:03.424687 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:08:03.425795 sudo[2311]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:03.498065 sshd[2310]: Connection closed by 10.200.16.10 port 40422
Jan 29 16:08:03.498659 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:03.501874 systemd[1]: sshd@5-10.200.20.42:22-10.200.16.10:40422.service: Deactivated successfully.
Jan 29 16:08:03.503491 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 16:08:03.504292 systemd-logind[1742]: Session 8 logged out. Waiting for processes to exit.
Jan 29 16:08:03.505013 systemd-logind[1742]: Removed session 8.
Jan 29 16:08:03.584258 systemd[1]: Started sshd@6-10.200.20.42:22-10.200.16.10:40428.service - OpenSSH per-connection server daemon (10.200.16.10:40428).
Jan 29 16:08:04.016415 sshd[2343]: Accepted publickey for core from 10.200.16.10 port 40428 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:08:04.017550 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:08:04.021408 systemd-logind[1742]: New session 9 of user core.
Jan 29 16:08:04.033150 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:08:04.262682 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:08:04.262937 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:08:04.457044 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 29 16:08:04.674075 update_engine[1749]: I20250129 16:08:04.673945 1749 update_attempter.cc:509] Updating boot flags...
Jan 29 16:08:04.734102 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2374)
Jan 29 16:08:05.246348 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 16:08:05.246368 (dockerd)[2427]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 16:08:06.144091 dockerd[2427]: time="2025-01-29T16:08:06.144018905Z" level=info msg="Starting up"
Jan 29 16:08:06.448543 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3053803101-merged.mount: Deactivated successfully.
Jan 29 16:08:06.504846 dockerd[2427]: time="2025-01-29T16:08:06.504799006Z" level=info msg="Loading containers: start."
Jan 29 16:08:06.678079 kernel: Initializing XFRM netlink socket
Jan 29 16:08:06.789419 systemd-networkd[1485]: docker0: Link UP
Jan 29 16:08:06.824116 dockerd[2427]: time="2025-01-29T16:08:06.824072702Z" level=info msg="Loading containers: done."
Jan 29 16:08:06.833867 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2572148475-merged.mount: Deactivated successfully.
Jan 29 16:08:06.843129 dockerd[2427]: time="2025-01-29T16:08:06.843094086Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 16:08:06.843209 dockerd[2427]: time="2025-01-29T16:08:06.843176006Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 16:08:06.843296 dockerd[2427]: time="2025-01-29T16:08:06.843273126Z" level=info msg="Daemon has completed initialization"
Jan 29 16:08:06.898076 dockerd[2427]: time="2025-01-29T16:08:06.897934401Z" level=info msg="API listen on /run/docker.sock"
Jan 29 16:08:06.898554 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 16:08:07.832008 containerd[1777]: time="2025-01-29T16:08:07.831964874Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 29 16:08:08.764917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064286816.mount: Deactivated successfully.
Jan 29 16:08:11.383096 containerd[1777]: time="2025-01-29T16:08:11.383040064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:11.389769 containerd[1777]: time="2025-01-29T16:08:11.389713459Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618070"
Jan 29 16:08:11.394649 containerd[1777]: time="2025-01-29T16:08:11.394578536Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:11.400012 containerd[1777]: time="2025-01-29T16:08:11.399938012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:11.401356 containerd[1777]: time="2025-01-29T16:08:11.401164372Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 3.569163338s"
Jan 29 16:08:11.401356 containerd[1777]: time="2025-01-29T16:08:11.401202332Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 29 16:08:11.401935 containerd[1777]: time="2025-01-29T16:08:11.401853211Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 29 16:08:12.206714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 29 16:08:12.215300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:12.307063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:12.311365 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:12.374630 kubelet[2671]: E0129 16:08:12.374575 2671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:12.376894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:12.377026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:12.377557 systemd[1]: kubelet.service: Consumed 121ms CPU time, 98.5M memory peak.
Jan 29 16:08:13.647647 containerd[1777]: time="2025-01-29T16:08:13.647590477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:13.650758 containerd[1777]: time="2025-01-29T16:08:13.650614235Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469467"
Jan 29 16:08:13.654609 containerd[1777]: time="2025-01-29T16:08:13.654554712Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:13.662878 containerd[1777]: time="2025-01-29T16:08:13.662814585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:13.664140 containerd[1777]: time="2025-01-29T16:08:13.663943064Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.261854693s"
Jan 29 16:08:13.664140 containerd[1777]: time="2025-01-29T16:08:13.663983184Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 29 16:08:13.664703 containerd[1777]: time="2025-01-29T16:08:13.664676744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 29 16:08:15.455072 containerd[1777]: time="2025-01-29T16:08:15.454564797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:15.456898 containerd[1777]: time="2025-01-29T16:08:15.456843515Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024217"
Jan 29 16:08:15.461723 containerd[1777]: time="2025-01-29T16:08:15.461677590Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:15.468279 containerd[1777]: time="2025-01-29T16:08:15.468218704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:15.469594 containerd[1777]: time="2025-01-29T16:08:15.469412263Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.804700719s"
Jan 29 16:08:15.469594 containerd[1777]: time="2025-01-29T16:08:15.469448103Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 29 16:08:15.470054 containerd[1777]: time="2025-01-29T16:08:15.470010143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 29 16:08:16.593258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275673555.mount: Deactivated successfully.
Jan 29 16:08:17.038144 containerd[1777]: time="2025-01-29T16:08:17.038091388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:17.041649 containerd[1777]: time="2025-01-29T16:08:17.041598105Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117"
Jan 29 16:08:17.045291 containerd[1777]: time="2025-01-29T16:08:17.045241462Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:17.049557 containerd[1777]: time="2025-01-29T16:08:17.049501698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:17.050544 containerd[1777]: time="2025-01-29T16:08:17.050372017Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.580231155s"
Jan 29 16:08:17.050544 containerd[1777]: time="2025-01-29T16:08:17.050406097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 29 16:08:17.051163 containerd[1777]: time="2025-01-29T16:08:17.050859417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 16:08:17.825659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439923488.mount: Deactivated successfully.
Jan 29 16:08:19.453314 containerd[1777]: time="2025-01-29T16:08:19.453255700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:19.460294 containerd[1777]: time="2025-01-29T16:08:19.459853814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 29 16:08:19.463436 containerd[1777]: time="2025-01-29T16:08:19.463383490Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:19.468782 containerd[1777]: time="2025-01-29T16:08:19.468723166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:19.471902 containerd[1777]: time="2025-01-29T16:08:19.471853643Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.420963026s"
Jan 29 16:08:19.472200 containerd[1777]: time="2025-01-29T16:08:19.472056643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 16:08:19.474549 containerd[1777]: time="2025-01-29T16:08:19.474435600Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 16:08:20.451201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036688537.mount: Deactivated successfully.
Jan 29 16:08:20.479808 containerd[1777]: time="2025-01-29T16:08:20.479739681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:20.483016 containerd[1777]: time="2025-01-29T16:08:20.482807678Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 29 16:08:20.491891 containerd[1777]: time="2025-01-29T16:08:20.491836710Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:20.498426 containerd[1777]: time="2025-01-29T16:08:20.498338944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:20.499306 containerd[1777]: time="2025-01-29T16:08:20.499171263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.024687663s"
Jan 29 16:08:20.499306 containerd[1777]: time="2025-01-29T16:08:20.499204823Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 29 16:08:20.500051 containerd[1777]: time="2025-01-29T16:08:20.499866143Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 29 16:08:21.206395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126609063.mount: Deactivated successfully.
Jan 29 16:08:22.456735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 29 16:08:22.467276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:22.543843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:22.546924 (kubelet)[2799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:08:22.579144 kubelet[2799]: E0129 16:08:22.579068 2799 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:08:22.581150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:08:22.581292 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:08:22.581688 systemd[1]: kubelet.service: Consumed 108ms CPU time, 94.2M memory peak.
Jan 29 16:08:25.599511 containerd[1777]: time="2025-01-29T16:08:25.599451977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:25.601910 containerd[1777]: time="2025-01-29T16:08:25.601867335Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Jan 29 16:08:25.605052 containerd[1777]: time="2025-01-29T16:08:25.604973732Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:25.610371 containerd[1777]: time="2025-01-29T16:08:25.610311808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:25.611595 containerd[1777]: time="2025-01-29T16:08:25.611464967Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 5.111553704s"
Jan 29 16:08:25.611595 containerd[1777]: time="2025-01-29T16:08:25.611498487Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 29 16:08:30.236323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:30.236471 systemd[1]: kubelet.service: Consumed 108ms CPU time, 94.2M memory peak.
Jan 29 16:08:30.243372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:30.269346 systemd[1]: Reload requested from client PID 2838 ('systemctl') (unit session-9.scope)...
Jan 29 16:08:30.269359 systemd[1]: Reloading...
Jan 29 16:08:30.372066 zram_generator::config[2885]: No configuration found.
Jan 29 16:08:30.469401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:08:30.569117 systemd[1]: Reloading finished in 299 ms.
Jan 29 16:08:30.781958 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 16:08:30.782082 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 16:08:30.782329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:30.782378 systemd[1]: kubelet.service: Consumed 65ms CPU time, 81.3M memory peak.
Jan 29 16:08:30.792714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:30.871393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:30.874470 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:08:30.908248 kubelet[2951]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:08:30.908248 kubelet[2951]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:08:30.908248 kubelet[2951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:08:30.909201 kubelet[2951]: I0129 16:08:30.908612 2951 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:08:31.469131 kubelet[2951]: I0129 16:08:31.469091 2951 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 16:08:31.469131 kubelet[2951]: I0129 16:08:31.469122 2951 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:08:31.469394 kubelet[2951]: I0129 16:08:31.469376 2951 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 16:08:31.489535 kubelet[2951]: E0129 16:08:31.489489 2951 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:08:31.490426 kubelet[2951]: I0129 16:08:31.490298 2951 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:08:31.497008 kubelet[2951]: E0129 16:08:31.496952 2951 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:08:31.497132 kubelet[2951]: I0129 16:08:31.497121 2951 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:08:31.500933 kubelet[2951]: I0129 16:08:31.500852 2951 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:08:31.501774 kubelet[2951]: I0129 16:08:31.501735 2951 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 16:08:31.502376 kubelet[2951]: I0129 16:08:31.501960 2951 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:08:31.502376 kubelet[2951]: I0129 16:08:31.501982 2951 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.0-a-877fd59aac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:08:31.502376 kubelet[2951]: I0129 16:08:31.502171 2951 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:08:31.502376 kubelet[2951]: I0129 16:08:31.502179 2951 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 16:08:31.502538 kubelet[2951]: I0129 16:08:31.502274 2951 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:08:31.503914 kubelet[2951]: I0129 16:08:31.503899 2951 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 16:08:31.504244 kubelet[2951]: I0129 16:08:31.504231 2951 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:08:31.504334 kubelet[2951]: I0129 16:08:31.504325 2951 kubelet.go:314] "Adding apiserver pod source"
Jan 29 16:08:31.504390 kubelet[2951]: I0129 16:08:31.504382 2951 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:08:31.508560 kubelet[2951]: W0129 16:08:31.508507 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-877fd59aac&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused
Jan 29 16:08:31.508624 kubelet[2951]: E0129 16:08:31.508562 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-877fd59aac&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:08:31.508899 kubelet[2951]: W0129 16:08:31.508846 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused
Jan 29 16:08:31.508899 kubelet[2951]: E0129 16:08:31.508893 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:08:31.509123 kubelet[2951]: I0129 16:08:31.508972 2951 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:08:31.510856 kubelet[2951]: I0129 16:08:31.510815 2951 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:08:31.511296 kubelet[2951]: W0129 16:08:31.511276 2951 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:08:31.512989 kubelet[2951]: I0129 16:08:31.512323 2951 server.go:1269] "Started kubelet"
Jan 29 16:08:31.512989 kubelet[2951]: I0129 16:08:31.512503 2951 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:08:31.512989 kubelet[2951]: I0129 16:08:31.512557 2951 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:08:31.512989 kubelet[2951]: I0129 16:08:31.512812 2951 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:08:31.517714 kubelet[2951]: I0129 16:08:31.517690 2951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:08:31.518280 kubelet[2951]: E0129 16:08:31.516663 2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.0-a-877fd59aac.181f3598c907565a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-877fd59aac,UID:ci-4230.0.0-a-877fd59aac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-877fd59aac,},FirstTimestamp:2025-01-29 16:08:31.512303194 +0000 UTC m=+0.635304409,LastTimestamp:2025-01-29 16:08:31.512303194 +0000 UTC m=+0.635304409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-877fd59aac,}"
Jan 29 16:08:31.521868 kubelet[2951]: I0129 16:08:31.521847 2951 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 16:08:31.522601 kubelet[2951]: I0129 16:08:31.522566 2951 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 16:08:31.522785 kubelet[2951]: E0129 16:08:31.522757 2951 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.0-a-877fd59aac\" not found"
Jan 29 16:08:31.523808 kubelet[2951]: I0129 16:08:31.523789 2951 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 16:08:31.524351 kubelet[2951]: I0129 16:08:31.524337 2951 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:08:31.525672 kubelet[2951]: I0129 16:08:31.524002 2951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:08:31.526152 kubelet[2951]: I0129 16:08:31.526134 2951 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:08:31.526304 kubelet[2951]: I0129 16:08:31.526288 2951 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:08:31.526746 kubelet[2951]: E0129 16:08:31.526692
2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-877fd59aac?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="200ms" Jan 29 16:08:31.526865 kubelet[2951]: W0129 16:08:31.526826 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 29 16:08:31.526899 kubelet[2951]: E0129 16:08:31.526867 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:08:31.528477 kubelet[2951]: I0129 16:08:31.528457 2951 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:08:31.533740 kubelet[2951]: E0129 16:08:31.533719 2951 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:08:31.546857 kubelet[2951]: I0129 16:08:31.546837 2951 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:08:31.546980 kubelet[2951]: I0129 16:08:31.546969 2951 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:08:31.547069 kubelet[2951]: I0129 16:08:31.547061 2951 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:08:31.549289 kubelet[2951]: I0129 16:08:31.549181 2951 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:08:31.552321 kubelet[2951]: I0129 16:08:31.550394 2951 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:08:31.552321 kubelet[2951]: I0129 16:08:31.550411 2951 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:08:31.552321 kubelet[2951]: I0129 16:08:31.550423 2951 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:08:31.552321 kubelet[2951]: E0129 16:08:31.550573 2951 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:08:31.552321 kubelet[2951]: W0129 16:08:31.551217 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 29 16:08:31.552321 kubelet[2951]: E0129 16:08:31.551276 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:08:31.554800 kubelet[2951]: I0129 16:08:31.554783 2951 policy_none.go:49] "None policy: Start" Jan 29 16:08:31.555381 kubelet[2951]: I0129 16:08:31.555368 2951 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:08:31.555616 kubelet[2951]: I0129 16:08:31.555518 2951 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:08:31.563783 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:08:31.577802 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:08:31.580362 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:08:31.587733 kubelet[2951]: I0129 16:08:31.587708 2951 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:08:31.587888 kubelet[2951]: I0129 16:08:31.587868 2951 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:08:31.587920 kubelet[2951]: I0129 16:08:31.587887 2951 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:08:31.588587 kubelet[2951]: I0129 16:08:31.588516 2951 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:08:31.589716 kubelet[2951]: E0129 16:08:31.589701 2951 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.0.0-a-877fd59aac\" not found" Jan 29 16:08:31.660376 systemd[1]: Created slice kubepods-burstable-podf75df63d9653d184d7b28a8585759f8b.slice - libcontainer container kubepods-burstable-podf75df63d9653d184d7b28a8585759f8b.slice. Jan 29 16:08:31.677088 systemd[1]: Created slice kubepods-burstable-podcbb163458df871aa73ed68032eff7d3f.slice - libcontainer container kubepods-burstable-podcbb163458df871aa73ed68032eff7d3f.slice. Jan 29 16:08:31.680866 systemd[1]: Created slice kubepods-burstable-pod659727a4fc449c7b57fa8894a5de5d9b.slice - libcontainer container kubepods-burstable-pod659727a4fc449c7b57fa8894a5de5d9b.slice. 
Jan 29 16:08:31.689815 kubelet[2951]: I0129 16:08:31.689788 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.690137 kubelet[2951]: E0129 16:08:31.690113 2951 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.727748 kubelet[2951]: E0129 16:08:31.727663 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-877fd59aac?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="400ms" Jan 29 16:08:31.826125 kubelet[2951]: I0129 16:08:31.826087 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.826247 kubelet[2951]: I0129 16:08:31.826136 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f75df63d9653d184d7b28a8585759f8b-kubeconfig\") pod \"kube-scheduler-ci-4230.0.0-a-877fd59aac\" (UID: \"f75df63d9653d184d7b28a8585759f8b\") " pod="kube-system/kube-scheduler-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.826247 kubelet[2951]: I0129 16:08:31.826154 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbb163458df871aa73ed68032eff7d3f-k8s-certs\") pod \"kube-apiserver-ci-4230.0.0-a-877fd59aac\" (UID: \"cbb163458df871aa73ed68032eff7d3f\") " 
pod="kube-system/kube-apiserver-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.826247 kubelet[2951]: I0129 16:08:31.826170 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbb163458df871aa73ed68032eff7d3f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.0-a-877fd59aac\" (UID: \"cbb163458df871aa73ed68032eff7d3f\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.826247 kubelet[2951]: I0129 16:08:31.826185 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-ca-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.826247 kubelet[2951]: I0129 16:08:31.826202 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.826347 kubelet[2951]: I0129 16:08:31.826216 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.826347 kubelet[2951]: I0129 16:08:31.826230 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.826347 kubelet[2951]: I0129 16:08:31.826243 2951 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbb163458df871aa73ed68032eff7d3f-ca-certs\") pod \"kube-apiserver-ci-4230.0.0-a-877fd59aac\" (UID: \"cbb163458df871aa73ed68032eff7d3f\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.892500 kubelet[2951]: I0129 16:08:31.892476 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.892797 kubelet[2951]: E0129 16:08:31.892752 2951 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4230.0.0-a-877fd59aac" Jan 29 16:08:31.975668 containerd[1777]: time="2025-01-29T16:08:31.975573081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.0-a-877fd59aac,Uid:f75df63d9653d184d7b28a8585759f8b,Namespace:kube-system,Attempt:0,}" Jan 29 16:08:31.980236 containerd[1777]: time="2025-01-29T16:08:31.980154678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.0-a-877fd59aac,Uid:cbb163458df871aa73ed68032eff7d3f,Namespace:kube-system,Attempt:0,}" Jan 29 16:08:31.983868 containerd[1777]: time="2025-01-29T16:08:31.983681036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.0-a-877fd59aac,Uid:659727a4fc449c7b57fa8894a5de5d9b,Namespace:kube-system,Attempt:0,}" Jan 29 16:08:32.128362 kubelet[2951]: E0129 16:08:32.128321 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-877fd59aac?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="800ms" Jan 29 16:08:32.294361 kubelet[2951]: I0129 16:08:32.294255 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-877fd59aac" Jan 29 16:08:32.294757 kubelet[2951]: E0129 16:08:32.294729 2951 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4230.0.0-a-877fd59aac" Jan 29 16:08:32.542054 kubelet[2951]: E0129 16:08:32.541938 2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.0.0-a-877fd59aac.181f3598c907565a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.0.0-a-877fd59aac,UID:ci-4230.0.0-a-877fd59aac,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.0.0-a-877fd59aac,},FirstTimestamp:2025-01-29 16:08:31.512303194 +0000 UTC m=+0.635304409,LastTimestamp:2025-01-29 16:08:31.512303194 +0000 UTC m=+0.635304409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.0.0-a-877fd59aac,}" Jan 29 16:08:32.720520 kubelet[2951]: W0129 16:08:32.720129 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 29 16:08:32.720520 kubelet[2951]: E0129 16:08:32.720175 2951 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:08:32.725693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702426563.mount: Deactivated successfully. Jan 29 16:08:32.754077 kubelet[2951]: W0129 16:08:32.754003 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 29 16:08:32.754218 kubelet[2951]: E0129 16:08:32.754198 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:08:32.768168 containerd[1777]: time="2025-01-29T16:08:32.768124574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:32.782277 containerd[1777]: time="2025-01-29T16:08:32.782224685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 16:08:32.786455 containerd[1777]: time="2025-01-29T16:08:32.786422323Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:32.792056 containerd[1777]: time="2025-01-29T16:08:32.791842920Z" level=info msg="ImageCreate event 
name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:32.799443 containerd[1777]: time="2025-01-29T16:08:32.799186755Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:08:32.802770 containerd[1777]: time="2025-01-29T16:08:32.801962794Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:32.806656 containerd[1777]: time="2025-01-29T16:08:32.806621431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:08:32.807541 containerd[1777]: time="2025-01-29T16:08:32.807515471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 831.87159ms" Jan 29 16:08:32.809955 containerd[1777]: time="2025-01-29T16:08:32.809896669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:08:32.814814 containerd[1777]: time="2025-01-29T16:08:32.814676546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 830.94359ms" Jan 29 16:08:32.829049 
containerd[1777]: time="2025-01-29T16:08:32.828997458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 848.38506ms" Jan 29 16:08:32.883935 kubelet[2951]: W0129 16:08:32.883858 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 29 16:08:32.883935 kubelet[2951]: E0129 16:08:32.883908 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:08:32.930283 kubelet[2951]: E0129 16:08:32.930184 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.0.0-a-877fd59aac?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="1.6s" Jan 29 16:08:32.968629 kubelet[2951]: W0129 16:08:32.968532 2951 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-877fd59aac&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Jan 29 16:08:32.968629 kubelet[2951]: E0129 16:08:32.968598 2951 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.0.0-a-877fd59aac&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:08:33.097380 kubelet[2951]: I0129 16:08:33.096907 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-877fd59aac" Jan 29 16:08:33.097380 kubelet[2951]: E0129 16:08:33.097269 2951 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4230.0.0-a-877fd59aac" Jan 29 16:08:33.475379 containerd[1777]: time="2025-01-29T16:08:33.475289237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:08:33.475986 containerd[1777]: time="2025-01-29T16:08:33.475663917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:08:33.476219 containerd[1777]: time="2025-01-29T16:08:33.476134277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:33.476592 containerd[1777]: time="2025-01-29T16:08:33.476419956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:33.476816 containerd[1777]: time="2025-01-29T16:08:33.476600356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:08:33.476924 containerd[1777]: time="2025-01-29T16:08:33.476713676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:08:33.476924 containerd[1777]: time="2025-01-29T16:08:33.476749316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:33.477373 containerd[1777]: time="2025-01-29T16:08:33.477259276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:33.482763 containerd[1777]: time="2025-01-29T16:08:33.482567393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:08:33.482763 containerd[1777]: time="2025-01-29T16:08:33.482618393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:08:33.482763 containerd[1777]: time="2025-01-29T16:08:33.482634033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:33.482763 containerd[1777]: time="2025-01-29T16:08:33.482698033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:08:33.502206 systemd[1]: Started cri-containerd-888041d167aaa7cf917d00071eaf0ad0630066a672f268f48c3b2b8beaca86b4.scope - libcontainer container 888041d167aaa7cf917d00071eaf0ad0630066a672f268f48c3b2b8beaca86b4. Jan 29 16:08:33.503882 systemd[1]: Started cri-containerd-e60f45aa2615ba426a0c708a50ed06d393c38ba667e64dfe2540621c67520942.scope - libcontainer container e60f45aa2615ba426a0c708a50ed06d393c38ba667e64dfe2540621c67520942. Jan 29 16:08:33.510726 systemd[1]: Started cri-containerd-e8004b319ed6f73d1b155e9eebdfeda9e33fed8835efa3b2e4633a62c72496bc.scope - libcontainer container e8004b319ed6f73d1b155e9eebdfeda9e33fed8835efa3b2e4633a62c72496bc. 
Jan 29 16:08:33.546208 containerd[1777]: time="2025-01-29T16:08:33.546076395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.0.0-a-877fd59aac,Uid:f75df63d9653d184d7b28a8585759f8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"888041d167aaa7cf917d00071eaf0ad0630066a672f268f48c3b2b8beaca86b4\"" Jan 29 16:08:33.553110 containerd[1777]: time="2025-01-29T16:08:33.552924271Z" level=info msg="CreateContainer within sandbox \"888041d167aaa7cf917d00071eaf0ad0630066a672f268f48c3b2b8beaca86b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:08:33.555653 containerd[1777]: time="2025-01-29T16:08:33.555622350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.0.0-a-877fd59aac,Uid:cbb163458df871aa73ed68032eff7d3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8004b319ed6f73d1b155e9eebdfeda9e33fed8835efa3b2e4633a62c72496bc\"" Jan 29 16:08:33.560085 containerd[1777]: time="2025-01-29T16:08:33.559998667Z" level=info msg="CreateContainer within sandbox \"e8004b319ed6f73d1b155e9eebdfeda9e33fed8835efa3b2e4633a62c72496bc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:08:33.562236 containerd[1777]: time="2025-01-29T16:08:33.562201186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.0.0-a-877fd59aac,Uid:659727a4fc449c7b57fa8894a5de5d9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e60f45aa2615ba426a0c708a50ed06d393c38ba667e64dfe2540621c67520942\"" Jan 29 16:08:33.564541 containerd[1777]: time="2025-01-29T16:08:33.564509985Z" level=info msg="CreateContainer within sandbox \"e60f45aa2615ba426a0c708a50ed06d393c38ba667e64dfe2540621c67520942\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:08:33.623395 containerd[1777]: time="2025-01-29T16:08:33.623228070Z" level=info msg="CreateContainer within sandbox 
\"888041d167aaa7cf917d00071eaf0ad0630066a672f268f48c3b2b8beaca86b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b0d778be223daea0d9a801fde15575b3928370512219d93c0f738757254d74c2\""
Jan 29 16:08:33.624005 containerd[1777]: time="2025-01-29T16:08:33.623970990Z" level=info msg="StartContainer for \"b0d778be223daea0d9a801fde15575b3928370512219d93c0f738757254d74c2\""
Jan 29 16:08:33.634252 containerd[1777]: time="2025-01-29T16:08:33.633488104Z" level=info msg="CreateContainer within sandbox \"e60f45aa2615ba426a0c708a50ed06d393c38ba667e64dfe2540621c67520942\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8ae2f4a091153fb08bd3e235f35ef09e38c7247637d0af4351b137a66da05cf\""
Jan 29 16:08:33.634252 containerd[1777]: time="2025-01-29T16:08:33.633991104Z" level=info msg="StartContainer for \"d8ae2f4a091153fb08bd3e235f35ef09e38c7247637d0af4351b137a66da05cf\""
Jan 29 16:08:33.639283 containerd[1777]: time="2025-01-29T16:08:33.639258621Z" level=info msg="CreateContainer within sandbox \"e8004b319ed6f73d1b155e9eebdfeda9e33fed8835efa3b2e4633a62c72496bc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"29d121c80c9c1133b7457ed7c75a4ec45b0b53074919b89e27cd3a6e7516221a\""
Jan 29 16:08:33.640886 containerd[1777]: time="2025-01-29T16:08:33.640856860Z" level=info msg="StartContainer for \"29d121c80c9c1133b7457ed7c75a4ec45b0b53074919b89e27cd3a6e7516221a\""
Jan 29 16:08:33.642763 kubelet[2951]: E0129 16:08:33.642738 2951 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:08:33.650212 systemd[1]: Started cri-containerd-b0d778be223daea0d9a801fde15575b3928370512219d93c0f738757254d74c2.scope - libcontainer container b0d778be223daea0d9a801fde15575b3928370512219d93c0f738757254d74c2.
Jan 29 16:08:33.676097 systemd[1]: Started cri-containerd-d8ae2f4a091153fb08bd3e235f35ef09e38c7247637d0af4351b137a66da05cf.scope - libcontainer container d8ae2f4a091153fb08bd3e235f35ef09e38c7247637d0af4351b137a66da05cf.
Jan 29 16:08:33.682312 systemd[1]: Started cri-containerd-29d121c80c9c1133b7457ed7c75a4ec45b0b53074919b89e27cd3a6e7516221a.scope - libcontainer container 29d121c80c9c1133b7457ed7c75a4ec45b0b53074919b89e27cd3a6e7516221a.
Jan 29 16:08:33.717302 containerd[1777]: time="2025-01-29T16:08:33.716718255Z" level=info msg="StartContainer for \"b0d778be223daea0d9a801fde15575b3928370512219d93c0f738757254d74c2\" returns successfully"
Jan 29 16:08:33.754057 containerd[1777]: time="2025-01-29T16:08:33.753784513Z" level=info msg="StartContainer for \"29d121c80c9c1133b7457ed7c75a4ec45b0b53074919b89e27cd3a6e7516221a\" returns successfully"
Jan 29 16:08:33.754245 containerd[1777]: time="2025-01-29T16:08:33.754157353Z" level=info msg="StartContainer for \"d8ae2f4a091153fb08bd3e235f35ef09e38c7247637d0af4351b137a66da05cf\" returns successfully"
Jan 29 16:08:34.699051 kubelet[2951]: I0129 16:08:34.699011 2951 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:36.084395 kubelet[2951]: E0129 16:08:36.084353 2951 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.0.0-a-877fd59aac\" not found" node="ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:36.233391 kubelet[2951]: I0129 16:08:36.232973 2951 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:36.511552 kubelet[2951]: I0129 16:08:36.511525 2951 apiserver.go:52] "Watching apiserver"
Jan 29 16:08:36.524616 kubelet[2951]: I0129 16:08:36.524579 2951 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 16:08:36.553146 kubelet[2951]: E0129 16:08:36.553113 2951 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:36.814537 kubelet[2951]: E0129 16:08:36.814304 2951 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.0.0-a-877fd59aac\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:38.337268 systemd[1]: Reload requested from client PID 3229 ('systemctl') (unit session-9.scope)...
Jan 29 16:08:38.337281 systemd[1]: Reloading...
Jan 29 16:08:38.432070 zram_generator::config[3276]: No configuration found.
Jan 29 16:08:38.540379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:08:38.655750 systemd[1]: Reloading finished in 318 ms.
Jan 29 16:08:38.674919 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:38.690104 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:08:38.690318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:38.690360 systemd[1]: kubelet.service: Consumed 939ms CPU time, 117.8M memory peak.
Jan 29 16:08:38.696235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:08:38.987236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:08:38.998335 (kubelet)[3340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:08:39.036989 kubelet[3340]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:08:39.036989 kubelet[3340]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:08:39.036989 kubelet[3340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:08:39.037945 kubelet[3340]: I0129 16:08:39.037874 3340 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:08:39.049149 kubelet[3340]: I0129 16:08:39.047474 3340 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 16:08:39.049149 kubelet[3340]: I0129 16:08:39.047504 3340 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:08:39.049149 kubelet[3340]: I0129 16:08:39.047708 3340 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 16:08:39.050974 kubelet[3340]: I0129 16:08:39.050930 3340 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 16:08:39.053626 kubelet[3340]: I0129 16:08:39.053599 3340 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:08:39.057140 kubelet[3340]: E0129 16:08:39.056973 3340 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:08:39.057140 kubelet[3340]: I0129 16:08:39.057005 3340 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:08:39.060174 kubelet[3340]: I0129 16:08:39.060147 3340 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:08:39.060258 kubelet[3340]: I0129 16:08:39.060251 3340 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 16:08:39.060364 kubelet[3340]: I0129 16:08:39.060337 3340 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:08:39.060506 kubelet[3340]: I0129 16:08:39.060359 3340 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.0.0-a-877fd59aac","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:08:39.060506 kubelet[3340]: I0129 16:08:39.060507 3340 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:08:39.060606 kubelet[3340]: I0129 16:08:39.060518 3340 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 16:08:39.060606 kubelet[3340]: I0129 16:08:39.060545 3340 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:08:39.060650 kubelet[3340]: I0129 16:08:39.060629 3340 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 16:08:39.060650 kubelet[3340]: I0129 16:08:39.060639 3340 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:08:39.060688 kubelet[3340]: I0129 16:08:39.060656 3340 kubelet.go:314] "Adding apiserver pod source"
Jan 29 16:08:39.060688 kubelet[3340]: I0129 16:08:39.060664 3340 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:08:39.066353 kubelet[3340]: I0129 16:08:39.066109 3340 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:08:39.066715 kubelet[3340]: I0129 16:08:39.066613 3340 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:08:39.067904 kubelet[3340]: I0129 16:08:39.067003 3340 server.go:1269] "Started kubelet"
Jan 29 16:08:39.071042 kubelet[3340]: I0129 16:08:39.070281 3340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:08:39.073411 kubelet[3340]: I0129 16:08:39.073384 3340 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:08:39.084646 kubelet[3340]: I0129 16:08:39.084626 3340 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 16:08:39.085860 kubelet[3340]: I0129 16:08:39.085822 3340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:08:39.086806 kubelet[3340]: I0129 16:08:39.074585 3340 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:08:39.087288 kubelet[3340]: I0129 16:08:39.074096 3340 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:08:39.088118 kubelet[3340]: I0129 16:08:39.088052 3340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:08:39.088118 kubelet[3340]: I0129 16:08:39.088089 3340 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:08:39.088118 kubelet[3340]: I0129 16:08:39.088108 3340 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 16:08:39.088218 kubelet[3340]: E0129 16:08:39.088148 3340 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:08:39.088241 kubelet[3340]: I0129 16:08:39.075879 3340 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 16:08:39.088719 kubelet[3340]: I0129 16:08:39.088401 3340 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:08:39.088719 kubelet[3340]: E0129 16:08:39.076005 3340 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230.0.0-a-877fd59aac\" not found"
Jan 29 16:08:39.088719 kubelet[3340]: I0129 16:08:39.075892 3340 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 16:08:39.099706 kubelet[3340]: I0129 16:08:39.099681 3340 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:08:39.099871 kubelet[3340]: I0129 16:08:39.099843 3340 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:08:39.099871 kubelet[3340]: I0129 16:08:39.099854 3340 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:08:39.099978 kubelet[3340]: I0129 16:08:39.099930 3340 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:08:39.159898 kubelet[3340]: I0129 16:08:39.159869 3340 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:08:39.159898 kubelet[3340]: I0129 16:08:39.159889 3340 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:08:39.160049 kubelet[3340]: I0129 16:08:39.159925 3340 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:08:39.160111 kubelet[3340]: I0129 16:08:39.160091 3340 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 16:08:39.160146 kubelet[3340]: I0129 16:08:39.160109 3340 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 16:08:39.160146 kubelet[3340]: I0129 16:08:39.160126 3340 policy_none.go:49] "None policy: Start"
Jan 29 16:08:39.160774 kubelet[3340]: I0129 16:08:39.160755 3340 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:08:39.160820 kubelet[3340]: I0129 16:08:39.160782 3340 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:08:39.160956 kubelet[3340]: I0129 16:08:39.160937 3340 state_mem.go:75] "Updated machine memory state"
Jan 29 16:08:39.164830 kubelet[3340]: I0129 16:08:39.164806 3340 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:08:39.164975 kubelet[3340]: I0129 16:08:39.164956 3340 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:08:39.165015 kubelet[3340]: I0129 16:08:39.164973 3340 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:08:39.165471 kubelet[3340]: I0129 16:08:39.165450 3340 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:08:39.198403 kubelet[3340]: W0129 16:08:39.198360 3340 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 16:08:39.205003 kubelet[3340]: W0129 16:08:39.204970 3340 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 16:08:39.205111 kubelet[3340]: W0129 16:08:39.205091 3340 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 16:08:39.268230 kubelet[3340]: I0129 16:08:39.267410 3340 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.281672 kubelet[3340]: I0129 16:08:39.281616 3340 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.281761 kubelet[3340]: I0129 16:08:39.281689 3340 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301461 kubelet[3340]: I0129 16:08:39.301250 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-k8s-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301461 kubelet[3340]: I0129 16:08:39.301276 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-kubeconfig\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301461 kubelet[3340]: I0129 16:08:39.301295 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbb163458df871aa73ed68032eff7d3f-ca-certs\") pod \"kube-apiserver-ci-4230.0.0-a-877fd59aac\" (UID: \"cbb163458df871aa73ed68032eff7d3f\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301461 kubelet[3340]: I0129 16:08:39.301309 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbb163458df871aa73ed68032eff7d3f-k8s-certs\") pod \"kube-apiserver-ci-4230.0.0-a-877fd59aac\" (UID: \"cbb163458df871aa73ed68032eff7d3f\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301461 kubelet[3340]: I0129 16:08:39.301327 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbb163458df871aa73ed68032eff7d3f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.0.0-a-877fd59aac\" (UID: \"cbb163458df871aa73ed68032eff7d3f\") " pod="kube-system/kube-apiserver-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301628 kubelet[3340]: I0129 16:08:39.301342 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-ca-certs\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301628 kubelet[3340]: I0129 16:08:39.301356 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301628 kubelet[3340]: I0129 16:08:39.301372 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/659727a4fc449c7b57fa8894a5de5d9b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.0.0-a-877fd59aac\" (UID: \"659727a4fc449c7b57fa8894a5de5d9b\") " pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.301628 kubelet[3340]: I0129 16:08:39.301390 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f75df63d9653d184d7b28a8585759f8b-kubeconfig\") pod \"kube-scheduler-ci-4230.0.0-a-877fd59aac\" (UID: \"f75df63d9653d184d7b28a8585759f8b\") " pod="kube-system/kube-scheduler-ci-4230.0.0-a-877fd59aac"
Jan 29 16:08:39.359858 sudo[3369]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 16:08:39.360163 sudo[3369]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 16:08:39.810019 sudo[3369]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:40.075844 kubelet[3340]: I0129 16:08:40.075733 3340 apiserver.go:52] "Watching apiserver"
Jan 29 16:08:40.089058 kubelet[3340]: I0129 16:08:40.089014 3340 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 16:08:40.179478 kubelet[3340]: I0129 16:08:40.179405 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.0.0-a-877fd59aac" podStartSLOduration=1.179390633 podStartE2EDuration="1.179390633s" podCreationTimestamp="2025-01-29 16:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:08:40.179166153 +0000 UTC m=+1.176908258" watchObservedRunningTime="2025-01-29 16:08:40.179390633 +0000 UTC m=+1.177132698"
Jan 29 16:08:40.219501 kubelet[3340]: I0129 16:08:40.219442 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.0.0-a-877fd59aac" podStartSLOduration=1.219425246 podStartE2EDuration="1.219425246s" podCreationTimestamp="2025-01-29 16:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:08:40.195583902 +0000 UTC m=+1.193325967" watchObservedRunningTime="2025-01-29 16:08:40.219425246 +0000 UTC m=+1.217167351"
Jan 29 16:08:40.238330 kubelet[3340]: I0129 16:08:40.238270 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.0.0-a-877fd59aac" podStartSLOduration=1.238254434 podStartE2EDuration="1.238254434s" podCreationTimestamp="2025-01-29 16:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:08:40.219763646 +0000 UTC m=+1.217505751" watchObservedRunningTime="2025-01-29 16:08:40.238254434 +0000 UTC m=+1.235996499"
Jan 29 16:08:41.238055 sudo[2346]: pam_unix(sudo:session): session closed for user root
Jan 29 16:08:41.309237 sshd[2345]: Connection closed by 10.200.16.10 port 40428
Jan 29 16:08:41.309798 sshd-session[2343]: pam_unix(sshd:session): session closed for user core
Jan 29 16:08:41.313282 systemd[1]: sshd@6-10.200.20.42:22-10.200.16.10:40428.service: Deactivated successfully.
Jan 29 16:08:41.315118 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 16:08:41.315289 systemd[1]: session-9.scope: Consumed 5.786s CPU time, 256.7M memory peak.
Jan 29 16:08:41.316557 systemd-logind[1742]: Session 9 logged out. Waiting for processes to exit.
Jan 29 16:08:41.317618 systemd-logind[1742]: Removed session 9.
Jan 29 16:08:42.589381 kubelet[3340]: I0129 16:08:42.589340 3340 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 16:08:42.590065 kubelet[3340]: I0129 16:08:42.589962 3340 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 16:08:42.590105 containerd[1777]: time="2025-01-29T16:08:42.589656837Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:08:43.580680 systemd[1]: Created slice kubepods-besteffort-podd4c3bb89_41d5_4d88_864d_e98afcff1892.slice - libcontainer container kubepods-besteffort-podd4c3bb89_41d5_4d88_864d_e98afcff1892.slice.
Jan 29 16:08:43.599335 systemd[1]: Created slice kubepods-burstable-pode3b7a5eb_5088_44fc_84ed_23624ab11d21.slice - libcontainer container kubepods-burstable-pode3b7a5eb_5088_44fc_84ed_23624ab11d21.slice.
Jan 29 16:08:43.625877 kubelet[3340]: I0129 16:08:43.625834 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-bpf-maps\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.625877 kubelet[3340]: I0129 16:08:43.625869 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-cgroup\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626227 kubelet[3340]: I0129 16:08:43.625889 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-etc-cni-netd\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626227 kubelet[3340]: I0129 16:08:43.625905 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-host-proc-sys-kernel\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626227 kubelet[3340]: I0129 16:08:43.625919 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4c3bb89-41d5-4d88-864d-e98afcff1892-kube-proxy\") pod \"kube-proxy-r9ch8\" (UID: \"d4c3bb89-41d5-4d88-864d-e98afcff1892\") " pod="kube-system/kube-proxy-r9ch8"
Jan 29 16:08:43.626227 kubelet[3340]: I0129 16:08:43.625934 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4c3bb89-41d5-4d88-864d-e98afcff1892-lib-modules\") pod \"kube-proxy-r9ch8\" (UID: \"d4c3bb89-41d5-4d88-864d-e98afcff1892\") " pod="kube-system/kube-proxy-r9ch8"
Jan 29 16:08:43.626227 kubelet[3340]: I0129 16:08:43.625947 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cni-path\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626227 kubelet[3340]: I0129 16:08:43.625962 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-lib-modules\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626355 kubelet[3340]: I0129 16:08:43.625977 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-host-proc-sys-net\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626355 kubelet[3340]: I0129 16:08:43.625995 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3b7a5eb-5088-44fc-84ed-23624ab11d21-clustermesh-secrets\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626355 kubelet[3340]: I0129 16:08:43.626009 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3b7a5eb-5088-44fc-84ed-23624ab11d21-hubble-tls\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626355 kubelet[3340]: I0129 16:08:43.626022 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdjbk\" (UniqueName: \"kubernetes.io/projected/e3b7a5eb-5088-44fc-84ed-23624ab11d21-kube-api-access-qdjbk\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626355 kubelet[3340]: I0129 16:08:43.626052 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4c3bb89-41d5-4d88-864d-e98afcff1892-xtables-lock\") pod \"kube-proxy-r9ch8\" (UID: \"d4c3bb89-41d5-4d88-864d-e98afcff1892\") " pod="kube-system/kube-proxy-r9ch8"
Jan 29 16:08:43.626355 kubelet[3340]: I0129 16:08:43.626066 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-hostproc\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626586 kubelet[3340]: I0129 16:08:43.626079 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-config-path\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626586 kubelet[3340]: I0129 16:08:43.626103 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sqn9\" (UniqueName: \"kubernetes.io/projected/d4c3bb89-41d5-4d88-864d-e98afcff1892-kube-api-access-6sqn9\") pod \"kube-proxy-r9ch8\" (UID: \"d4c3bb89-41d5-4d88-864d-e98afcff1892\") " pod="kube-system/kube-proxy-r9ch8"
Jan 29 16:08:43.626586 kubelet[3340]: I0129 16:08:43.626118 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-xtables-lock\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.626586 kubelet[3340]: I0129 16:08:43.626136 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-run\") pod \"cilium-qf2b8\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " pod="kube-system/cilium-qf2b8"
Jan 29 16:08:43.631816 systemd[1]: Created slice kubepods-besteffort-podd5930b9f_c877_484f_b461_3587d08ef908.slice - libcontainer container kubepods-besteffort-podd5930b9f_c877_484f_b461_3587d08ef908.slice.
Jan 29 16:08:43.728510 kubelet[3340]: I0129 16:08:43.726412 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkn7r\" (UniqueName: \"kubernetes.io/projected/d5930b9f-c877-484f-b461-3587d08ef908-kube-api-access-gkn7r\") pod \"cilium-operator-5d85765b45-fjdmd\" (UID: \"d5930b9f-c877-484f-b461-3587d08ef908\") " pod="kube-system/cilium-operator-5d85765b45-fjdmd"
Jan 29 16:08:43.728510 kubelet[3340]: I0129 16:08:43.726743 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5930b9f-c877-484f-b461-3587d08ef908-cilium-config-path\") pod \"cilium-operator-5d85765b45-fjdmd\" (UID: \"d5930b9f-c877-484f-b461-3587d08ef908\") " pod="kube-system/cilium-operator-5d85765b45-fjdmd"
Jan 29 16:08:43.890838 containerd[1777]: time="2025-01-29T16:08:43.890739975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9ch8,Uid:d4c3bb89-41d5-4d88-864d-e98afcff1892,Namespace:kube-system,Attempt:0,}"
Jan 29 16:08:43.902464 containerd[1777]: time="2025-01-29T16:08:43.902424647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qf2b8,Uid:e3b7a5eb-5088-44fc-84ed-23624ab11d21,Namespace:kube-system,Attempt:0,}"
Jan 29 16:08:43.930049 containerd[1777]: time="2025-01-29T16:08:43.929949309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:08:43.930159 containerd[1777]: time="2025-01-29T16:08:43.930082149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:08:43.930159 containerd[1777]: time="2025-01-29T16:08:43.930110149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:08:43.930265 containerd[1777]: time="2025-01-29T16:08:43.930232869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:08:43.936884 containerd[1777]: time="2025-01-29T16:08:43.936805904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fjdmd,Uid:d5930b9f-c877-484f-b461-3587d08ef908,Namespace:kube-system,Attempt:0,}"
Jan 29 16:08:43.946419 systemd[1]: Started cri-containerd-f713a155496004256365954d677b0d39a51c3ecedd51828da157aad5959cfca4.scope - libcontainer container f713a155496004256365954d677b0d39a51c3ecedd51828da157aad5959cfca4.
Jan 29 16:08:43.953111 containerd[1777]: time="2025-01-29T16:08:43.952596534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:08:43.953111 containerd[1777]: time="2025-01-29T16:08:43.952644814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:08:43.953111 containerd[1777]: time="2025-01-29T16:08:43.952658534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:08:43.953821 containerd[1777]: time="2025-01-29T16:08:43.953501813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:08:43.970324 systemd[1]: Started cri-containerd-9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e.scope - libcontainer container 9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e.
Jan 29 16:08:43.975992 containerd[1777]: time="2025-01-29T16:08:43.975725839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9ch8,Uid:d4c3bb89-41d5-4d88-864d-e98afcff1892,Namespace:kube-system,Attempt:0,} returns sandbox id \"f713a155496004256365954d677b0d39a51c3ecedd51828da157aad5959cfca4\""
Jan 29 16:08:43.979848 containerd[1777]: time="2025-01-29T16:08:43.979794876Z" level=info msg="CreateContainer within sandbox \"f713a155496004256365954d677b0d39a51c3ecedd51828da157aad5959cfca4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 16:08:43.999763 containerd[1777]: time="2025-01-29T16:08:43.999708103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qf2b8,Uid:e3b7a5eb-5088-44fc-84ed-23624ab11d21,Namespace:kube-system,Attempt:0,} returns sandbox id \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\""
Jan 29 16:08:44.004024 containerd[1777]: time="2025-01-29T16:08:44.003918460Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 16:08:44.004826 containerd[1777]: time="2025-01-29T16:08:44.004728059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:08:44.004826 containerd[1777]: time="2025-01-29T16:08:44.004794779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:08:44.004826 containerd[1777]: time="2025-01-29T16:08:44.004809299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:08:44.005005 containerd[1777]: time="2025-01-29T16:08:44.004875739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:08:44.020176 systemd[1]: Started cri-containerd-ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db.scope - libcontainer container ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db.
Jan 29 16:08:44.025359 containerd[1777]: time="2025-01-29T16:08:44.025224046Z" level=info msg="CreateContainer within sandbox \"f713a155496004256365954d677b0d39a51c3ecedd51828da157aad5959cfca4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f38a505e09c6fc30df8e7187d7ceb857e302c941a58a8fbc03569896d3fc4ab0\""
Jan 29 16:08:44.025806 containerd[1777]: time="2025-01-29T16:08:44.025688806Z" level=info msg="StartContainer for \"f38a505e09c6fc30df8e7187d7ceb857e302c941a58a8fbc03569896d3fc4ab0\""
Jan 29 16:08:44.055194 systemd[1]: Started cri-containerd-f38a505e09c6fc30df8e7187d7ceb857e302c941a58a8fbc03569896d3fc4ab0.scope - libcontainer container f38a505e09c6fc30df8e7187d7ceb857e302c941a58a8fbc03569896d3fc4ab0.
Jan 29 16:08:44.060744 containerd[1777]: time="2025-01-29T16:08:44.060713182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fjdmd,Uid:d5930b9f-c877-484f-b461-3587d08ef908,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db\""
Jan 29 16:08:44.085330 containerd[1777]: time="2025-01-29T16:08:44.085281846Z" level=info msg="StartContainer for \"f38a505e09c6fc30df8e7187d7ceb857e302c941a58a8fbc03569896d3fc4ab0\" returns successfully"
Jan 29 16:08:46.520384 kubelet[3340]: I0129 16:08:46.520279 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r9ch8" podStartSLOduration=3.520252501 podStartE2EDuration="3.520252501s" podCreationTimestamp="2025-01-29 16:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:08:44.175022987 +0000 UTC m=+5.172765092" watchObservedRunningTime="2025-01-29 16:08:46.520252501 +0000 UTC m=+7.517994606"
Jan 29 16:08:49.482439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080961321.mount: Deactivated successfully.
Jan 29 16:08:51.616198 containerd[1777]: time="2025-01-29T16:08:51.616135367Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:51.620697 containerd[1777]: time="2025-01-29T16:08:51.620651963Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 29 16:08:51.624832 containerd[1777]: time="2025-01-29T16:08:51.624789240Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:51.626308 containerd[1777]: time="2025-01-29T16:08:51.626281599Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.622330539s"
Jan 29 16:08:51.626472 containerd[1777]: time="2025-01-29T16:08:51.626373079Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 29 16:08:51.628886 containerd[1777]: time="2025-01-29T16:08:51.628800597Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 16:08:51.629057 containerd[1777]: time="2025-01-29T16:08:51.628857477Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:08:51.673563 containerd[1777]: time="2025-01-29T16:08:51.673492162Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\""
Jan 29 16:08:51.674069 containerd[1777]: time="2025-01-29T16:08:51.673840042Z" level=info msg="StartContainer for \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\""
Jan 29 16:08:51.694548 systemd[1]: run-containerd-runc-k8s.io-08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138-runc.aSgsmb.mount: Deactivated successfully.
Jan 29 16:08:51.706174 systemd[1]: Started cri-containerd-08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138.scope - libcontainer container 08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138.
Jan 29 16:08:51.730343 containerd[1777]: time="2025-01-29T16:08:51.730306758Z" level=info msg="StartContainer for \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\" returns successfully"
Jan 29 16:08:51.734415 systemd[1]: cri-containerd-08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138.scope: Deactivated successfully.
Jan 29 16:08:52.661784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138-rootfs.mount: Deactivated successfully.
Jan 29 16:08:53.049515 containerd[1777]: time="2025-01-29T16:08:53.049438050Z" level=info msg="shim disconnected" id=08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138 namespace=k8s.io
Jan 29 16:08:53.049515 containerd[1777]: time="2025-01-29T16:08:53.049482210Z" level=warning msg="cleaning up after shim disconnected" id=08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138 namespace=k8s.io
Jan 29 16:08:53.049515 containerd[1777]: time="2025-01-29T16:08:53.049490890Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:08:53.059845 containerd[1777]: time="2025-01-29T16:08:53.059114283Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:08:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:08:53.180293 containerd[1777]: time="2025-01-29T16:08:53.179835549Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:08:53.201505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787600099.mount: Deactivated successfully.
Jan 29 16:08:53.216671 containerd[1777]: time="2025-01-29T16:08:53.216579760Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\""
Jan 29 16:08:53.218004 containerd[1777]: time="2025-01-29T16:08:53.217254080Z" level=info msg="StartContainer for \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\""
Jan 29 16:08:53.242171 systemd[1]: Started cri-containerd-8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b.scope - libcontainer container 8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b.
Jan 29 16:08:53.268637 containerd[1777]: time="2025-01-29T16:08:53.268350360Z" level=info msg="StartContainer for \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\" returns successfully"
Jan 29 16:08:53.276485 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:08:53.276953 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:08:53.277389 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:08:53.285413 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:08:53.288090 systemd[1]: cri-containerd-8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b.scope: Deactivated successfully.
Jan 29 16:08:53.302076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:08:53.319115 containerd[1777]: time="2025-01-29T16:08:53.318998440Z" level=info msg="shim disconnected" id=8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b namespace=k8s.io
Jan 29 16:08:53.319115 containerd[1777]: time="2025-01-29T16:08:53.319061880Z" level=warning msg="cleaning up after shim disconnected" id=8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b namespace=k8s.io
Jan 29 16:08:53.319115 containerd[1777]: time="2025-01-29T16:08:53.319070720Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:08:53.661619 systemd[1]: run-containerd-runc-k8s.io-8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b-runc.R3Fekp.mount: Deactivated successfully.
Jan 29 16:08:53.661736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b-rootfs.mount: Deactivated successfully.
Jan 29 16:08:54.180231 containerd[1777]: time="2025-01-29T16:08:54.180185930Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:08:54.218442 containerd[1777]: time="2025-01-29T16:08:54.218390460Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\""
Jan 29 16:08:54.221634 containerd[1777]: time="2025-01-29T16:08:54.221301898Z" level=info msg="StartContainer for \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\""
Jan 29 16:08:54.272180 systemd[1]: Started cri-containerd-6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e.scope - libcontainer container 6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e.
Jan 29 16:08:54.303161 systemd[1]: cri-containerd-6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e.scope: Deactivated successfully.
Jan 29 16:08:54.307689 containerd[1777]: time="2025-01-29T16:08:54.307578271Z" level=info msg="StartContainer for \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\" returns successfully"
Jan 29 16:08:54.339897 containerd[1777]: time="2025-01-29T16:08:54.339814765Z" level=info msg="shim disconnected" id=6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e namespace=k8s.io
Jan 29 16:08:54.340301 containerd[1777]: time="2025-01-29T16:08:54.339971725Z" level=warning msg="cleaning up after shim disconnected" id=6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e namespace=k8s.io
Jan 29 16:08:54.340301 containerd[1777]: time="2025-01-29T16:08:54.339985565Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:08:54.662017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e-rootfs.mount: Deactivated successfully.
Jan 29 16:08:54.721412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198303593.mount: Deactivated successfully.
Jan 29 16:08:55.141212 containerd[1777]: time="2025-01-29T16:08:55.141164501Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:55.144247 containerd[1777]: time="2025-01-29T16:08:55.144191219Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 29 16:08:55.150377 containerd[1777]: time="2025-01-29T16:08:55.150343534Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:08:55.151763 containerd[1777]: time="2025-01-29T16:08:55.151727293Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.522894136s"
Jan 29 16:08:55.151845 containerd[1777]: time="2025-01-29T16:08:55.151765253Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 29 16:08:55.155017 containerd[1777]: time="2025-01-29T16:08:55.154970651Z" level=info msg="CreateContainer within sandbox \"ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 16:08:55.184456 containerd[1777]: time="2025-01-29T16:08:55.184412748Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:08:55.192991 containerd[1777]: time="2025-01-29T16:08:55.191607222Z" level=info msg="CreateContainer within sandbox \"ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\""
Jan 29 16:08:55.193244 containerd[1777]: time="2025-01-29T16:08:55.193223781Z" level=info msg="StartContainer for \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\""
Jan 29 16:08:55.220187 systemd[1]: Started cri-containerd-a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05.scope - libcontainer container a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05.
Jan 29 16:08:55.223899 containerd[1777]: time="2025-01-29T16:08:55.223847477Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\""
Jan 29 16:08:55.226308 containerd[1777]: time="2025-01-29T16:08:55.226207715Z" level=info msg="StartContainer for \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\""
Jan 29 16:08:55.256338 systemd[1]: Started cri-containerd-5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b.scope - libcontainer container 5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b.
Jan 29 16:08:55.261154 containerd[1777]: time="2025-01-29T16:08:55.260583768Z" level=info msg="StartContainer for \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\" returns successfully"
Jan 29 16:08:55.286311 systemd[1]: cri-containerd-5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b.scope: Deactivated successfully.
Jan 29 16:08:55.295162 containerd[1777]: time="2025-01-29T16:08:55.295025221Z" level=info msg="StartContainer for \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\" returns successfully"
Jan 29 16:08:55.654007 containerd[1777]: time="2025-01-29T16:08:55.651965407Z" level=info msg="shim disconnected" id=5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b namespace=k8s.io
Jan 29 16:08:55.654007 containerd[1777]: time="2025-01-29T16:08:55.652018166Z" level=warning msg="cleaning up after shim disconnected" id=5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b namespace=k8s.io
Jan 29 16:08:55.654460 containerd[1777]: time="2025-01-29T16:08:55.652026006Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:08:56.194373 containerd[1777]: time="2025-01-29T16:08:56.194295836Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:08:56.205660 kubelet[3340]: I0129 16:08:56.205344 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fjdmd" podStartSLOduration=2.114556556 podStartE2EDuration="13.205329227s" podCreationTimestamp="2025-01-29 16:08:43 +0000 UTC" firstStartedPulling="2025-01-29 16:08:44.062114341 +0000 UTC m=+5.059856446" lastFinishedPulling="2025-01-29 16:08:55.152887012 +0000 UTC m=+16.150629117" observedRunningTime="2025-01-29 16:08:56.204470907 +0000 UTC m=+17.202213012" watchObservedRunningTime="2025-01-29 16:08:56.205329227 +0000 UTC m=+17.203071332"
Jan 29 16:08:56.225739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983604976.mount: Deactivated successfully.
Jan 29 16:08:56.236165 containerd[1777]: time="2025-01-29T16:08:56.236122961Z" level=info msg="CreateContainer within sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\""
Jan 29 16:08:56.236823 containerd[1777]: time="2025-01-29T16:08:56.236798841Z" level=info msg="StartContainer for \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\""
Jan 29 16:08:56.262259 systemd[1]: Started cri-containerd-ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad.scope - libcontainer container ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad.
Jan 29 16:08:56.292094 containerd[1777]: time="2025-01-29T16:08:56.292003635Z" level=info msg="StartContainer for \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\" returns successfully"
Jan 29 16:08:56.403054 kubelet[3340]: I0129 16:08:56.400593 3340 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 29 16:08:56.444273 systemd[1]: Created slice kubepods-burstable-podf9b4d3c6_e849_48f4_9a1f_5b6f89f0fa16.slice - libcontainer container kubepods-burstable-podf9b4d3c6_e849_48f4_9a1f_5b6f89f0fa16.slice.
Jan 29 16:08:56.451644 systemd[1]: Created slice kubepods-burstable-pod63f2409b_d5d7_4382_9b7d_8261d81d66b9.slice - libcontainer container kubepods-burstable-pod63f2409b_d5d7_4382_9b7d_8261d81d66b9.slice.
Jan 29 16:08:56.500842 kubelet[3340]: I0129 16:08:56.500606 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63f2409b-d5d7-4382-9b7d-8261d81d66b9-config-volume\") pod \"coredns-6f6b679f8f-ntkg8\" (UID: \"63f2409b-d5d7-4382-9b7d-8261d81d66b9\") " pod="kube-system/coredns-6f6b679f8f-ntkg8"
Jan 29 16:08:56.500842 kubelet[3340]: I0129 16:08:56.500644 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6gn4\" (UniqueName: \"kubernetes.io/projected/63f2409b-d5d7-4382-9b7d-8261d81d66b9-kube-api-access-f6gn4\") pod \"coredns-6f6b679f8f-ntkg8\" (UID: \"63f2409b-d5d7-4382-9b7d-8261d81d66b9\") " pod="kube-system/coredns-6f6b679f8f-ntkg8"
Jan 29 16:08:56.500842 kubelet[3340]: I0129 16:08:56.500666 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcg9g\" (UniqueName: \"kubernetes.io/projected/f9b4d3c6-e849-48f4-9a1f-5b6f89f0fa16-kube-api-access-mcg9g\") pod \"coredns-6f6b679f8f-5rxlh\" (UID: \"f9b4d3c6-e849-48f4-9a1f-5b6f89f0fa16\") " pod="kube-system/coredns-6f6b679f8f-5rxlh"
Jan 29 16:08:56.500842 kubelet[3340]: I0129 16:08:56.500685 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9b4d3c6-e849-48f4-9a1f-5b6f89f0fa16-config-volume\") pod \"coredns-6f6b679f8f-5rxlh\" (UID: \"f9b4d3c6-e849-48f4-9a1f-5b6f89f0fa16\") " pod="kube-system/coredns-6f6b679f8f-5rxlh"
Jan 29 16:08:56.750021 containerd[1777]: time="2025-01-29T16:08:56.749281175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5rxlh,Uid:f9b4d3c6-e849-48f4-9a1f-5b6f89f0fa16,Namespace:kube-system,Attempt:0,}"
Jan 29 16:08:56.756130 containerd[1777]: time="2025-01-29T16:08:56.755956929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ntkg8,Uid:63f2409b-d5d7-4382-9b7d-8261d81d66b9,Namespace:kube-system,Attempt:0,}"
Jan 29 16:08:57.219769 kubelet[3340]: I0129 16:08:57.217675 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qf2b8" podStartSLOduration=6.592134528 podStartE2EDuration="14.217657385s" podCreationTimestamp="2025-01-29 16:08:43 +0000 UTC" firstStartedPulling="2025-01-29 16:08:44.001822861 +0000 UTC m=+4.999564966" lastFinishedPulling="2025-01-29 16:08:51.627345718 +0000 UTC m=+12.625087823" observedRunningTime="2025-01-29 16:08:57.217610585 +0000 UTC m=+18.215352650" watchObservedRunningTime="2025-01-29 16:08:57.217657385 +0000 UTC m=+18.215399450"
Jan 29 16:08:59.286780 systemd-networkd[1485]: cilium_host: Link UP
Jan 29 16:08:59.287338 systemd-networkd[1485]: cilium_net: Link UP
Jan 29 16:08:59.287930 systemd-networkd[1485]: cilium_net: Gained carrier
Jan 29 16:08:59.288602 systemd-networkd[1485]: cilium_host: Gained carrier
Jan 29 16:08:59.445121 systemd-networkd[1485]: cilium_vxlan: Link UP
Jan 29 16:08:59.445130 systemd-networkd[1485]: cilium_vxlan: Gained carrier
Jan 29 16:08:59.448573 systemd-networkd[1485]: cilium_host: Gained IPv6LL
Jan 29 16:08:59.773054 kernel: NET: Registered PF_ALG protocol family
Jan 29 16:08:59.832137 systemd-networkd[1485]: cilium_net: Gained IPv6LL
Jan 29 16:09:00.527856 systemd-networkd[1485]: lxc_health: Link UP
Jan 29 16:09:00.535890 systemd-networkd[1485]: lxc_health: Gained carrier
Jan 29 16:09:00.845089 kernel: eth0: renamed from tmpf980a
Jan 29 16:09:00.854788 systemd-networkd[1485]: lxc5e7f80498fa1: Link UP
Jan 29 16:09:00.864053 kernel: eth0: renamed from tmp20ac2
Jan 29 16:09:00.871612 systemd-networkd[1485]: lxc03a162690e5d: Link UP
Jan 29 16:09:00.873848 systemd-networkd[1485]: lxc5e7f80498fa1: Gained carrier
Jan 29 16:09:00.874074 systemd-networkd[1485]: cilium_vxlan: Gained IPv6LL
Jan 29 16:09:00.874251 systemd-networkd[1485]: lxc03a162690e5d: Gained carrier
Jan 29 16:09:01.944226 systemd-networkd[1485]: lxc_health: Gained IPv6LL
Jan 29 16:09:02.073145 systemd-networkd[1485]: lxc5e7f80498fa1: Gained IPv6LL
Jan 29 16:09:02.265630 systemd-networkd[1485]: lxc03a162690e5d: Gained IPv6LL
Jan 29 16:09:04.332008 containerd[1777]: time="2025-01-29T16:09:04.331922715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:09:04.333098 containerd[1777]: time="2025-01-29T16:09:04.332223075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:09:04.333098 containerd[1777]: time="2025-01-29T16:09:04.332347675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:04.333098 containerd[1777]: time="2025-01-29T16:09:04.332818755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:04.359103 containerd[1777]: time="2025-01-29T16:09:04.357594494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:09:04.359103 containerd[1777]: time="2025-01-29T16:09:04.357746094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:09:04.359103 containerd[1777]: time="2025-01-29T16:09:04.357765814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:04.359103 containerd[1777]: time="2025-01-29T16:09:04.358379333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:09:04.374212 systemd[1]: Started cri-containerd-20ac2388fa29b89a8c79811199dc23d8d8cdbd97cb46dbd187bcf55ea7db8c47.scope - libcontainer container 20ac2388fa29b89a8c79811199dc23d8d8cdbd97cb46dbd187bcf55ea7db8c47.
Jan 29 16:09:04.378267 systemd[1]: Started cri-containerd-f980a24baa3e0c0082c905d4d944abbbc1fc80a85b3aa4cb8c7bc8fd4d4da68e.scope - libcontainer container f980a24baa3e0c0082c905d4d944abbbc1fc80a85b3aa4cb8c7bc8fd4d4da68e.
Jan 29 16:09:04.413536 containerd[1777]: time="2025-01-29T16:09:04.413482848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5rxlh,Uid:f9b4d3c6-e849-48f4-9a1f-5b6f89f0fa16,Namespace:kube-system,Attempt:0,} returns sandbox id \"f980a24baa3e0c0082c905d4d944abbbc1fc80a85b3aa4cb8c7bc8fd4d4da68e\""
Jan 29 16:09:04.425229 containerd[1777]: time="2025-01-29T16:09:04.423777039Z" level=info msg="CreateContainer within sandbox \"f980a24baa3e0c0082c905d4d944abbbc1fc80a85b3aa4cb8c7bc8fd4d4da68e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:09:04.428826 containerd[1777]: time="2025-01-29T16:09:04.428796955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ntkg8,Uid:63f2409b-d5d7-4382-9b7d-8261d81d66b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"20ac2388fa29b89a8c79811199dc23d8d8cdbd97cb46dbd187bcf55ea7db8c47\""
Jan 29 16:09:04.432297 containerd[1777]: time="2025-01-29T16:09:04.432263592Z" level=info msg="CreateContainer within sandbox \"20ac2388fa29b89a8c79811199dc23d8d8cdbd97cb46dbd187bcf55ea7db8c47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 16:09:04.478081 containerd[1777]: time="2025-01-29T16:09:04.477584955Z" level=info msg="CreateContainer within sandbox \"f980a24baa3e0c0082c905d4d944abbbc1fc80a85b3aa4cb8c7bc8fd4d4da68e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72dff9074452acbf667bf0292efcb276fb7c8c25db002d02100b9d6543c1f8bb\""
Jan 29 16:09:04.480603 containerd[1777]: time="2025-01-29T16:09:04.478237514Z" level=info msg="StartContainer for \"72dff9074452acbf667bf0292efcb276fb7c8c25db002d02100b9d6543c1f8bb\""
Jan 29 16:09:04.489526 containerd[1777]: time="2025-01-29T16:09:04.489436305Z" level=info msg="CreateContainer within sandbox \"20ac2388fa29b89a8c79811199dc23d8d8cdbd97cb46dbd187bcf55ea7db8c47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"954072402b97baeb508e64b34feafb9ae7c8ed0445b7883f6e60255ddbc62677\""
Jan 29 16:09:04.492024 containerd[1777]: time="2025-01-29T16:09:04.490912664Z" level=info msg="StartContainer for \"954072402b97baeb508e64b34feafb9ae7c8ed0445b7883f6e60255ddbc62677\""
Jan 29 16:09:04.515164 systemd[1]: Started cri-containerd-72dff9074452acbf667bf0292efcb276fb7c8c25db002d02100b9d6543c1f8bb.scope - libcontainer container 72dff9074452acbf667bf0292efcb276fb7c8c25db002d02100b9d6543c1f8bb.
Jan 29 16:09:04.528158 systemd[1]: Started cri-containerd-954072402b97baeb508e64b34feafb9ae7c8ed0445b7883f6e60255ddbc62677.scope - libcontainer container 954072402b97baeb508e64b34feafb9ae7c8ed0445b7883f6e60255ddbc62677.
Jan 29 16:09:04.555946 containerd[1777]: time="2025-01-29T16:09:04.555490690Z" level=info msg="StartContainer for \"72dff9074452acbf667bf0292efcb276fb7c8c25db002d02100b9d6543c1f8bb\" returns successfully"
Jan 29 16:09:04.561983 containerd[1777]: time="2025-01-29T16:09:04.561795765Z" level=info msg="StartContainer for \"954072402b97baeb508e64b34feafb9ae7c8ed0445b7883f6e60255ddbc62677\" returns successfully"
Jan 29 16:09:05.000139 kubelet[3340]: I0129 16:09:04.999691 3340 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 16:09:05.226691 kubelet[3340]: I0129 16:09:05.226568 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5rxlh" podStartSLOduration=22.226553173 podStartE2EDuration="22.226553173s" podCreationTimestamp="2025-01-29 16:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:05.225343055 +0000 UTC m=+26.223085160" watchObservedRunningTime="2025-01-29 16:09:05.226553173 +0000 UTC m=+26.224295278"
Jan 29 16:10:04.473271 systemd[1]: Started sshd@7-10.200.20.42:22-10.200.16.10:38528.service - OpenSSH per-connection server daemon (10.200.16.10:38528).
Jan 29 16:10:04.915390 sshd[4738]: Accepted publickey for core from 10.200.16.10 port 38528 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:10:04.916715 sshd-session[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:10:04.921163 systemd-logind[1742]: New session 10 of user core.
Jan 29 16:10:04.924177 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 16:10:05.343521 sshd[4740]: Connection closed by 10.200.16.10 port 38528
Jan 29 16:10:05.344080 sshd-session[4738]: pam_unix(sshd:session): session closed for user core
Jan 29 16:10:05.347328 systemd-logind[1742]: Session 10 logged out. Waiting for processes to exit.
Jan 29 16:10:05.348101 systemd[1]: sshd@7-10.200.20.42:22-10.200.16.10:38528.service: Deactivated successfully.
Jan 29 16:10:05.349886 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 16:10:05.350911 systemd-logind[1742]: Removed session 10.
Jan 29 16:10:10.425523 systemd[1]: Started sshd@8-10.200.20.42:22-10.200.16.10:47272.service - OpenSSH per-connection server daemon (10.200.16.10:47272).
Jan 29 16:10:10.856750 sshd[4753]: Accepted publickey for core from 10.200.16.10 port 47272 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:10:10.857957 sshd-session[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:10:10.862247 systemd-logind[1742]: New session 11 of user core.
Jan 29 16:10:10.870240 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 16:10:11.226048 sshd[4755]: Connection closed by 10.200.16.10 port 47272
Jan 29 16:10:11.226561 sshd-session[4753]: pam_unix(sshd:session): session closed for user core
Jan 29 16:10:11.229794 systemd-logind[1742]: Session 11 logged out. Waiting for processes to exit.
Jan 29 16:10:11.230480 systemd[1]: sshd@8-10.200.20.42:22-10.200.16.10:47272.service: Deactivated successfully.
Jan 29 16:10:11.232400 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 16:10:11.233798 systemd-logind[1742]: Removed session 11.
Jan 29 16:10:16.312529 systemd[1]: Started sshd@9-10.200.20.42:22-10.200.16.10:49948.service - OpenSSH per-connection server daemon (10.200.16.10:49948).
Jan 29 16:10:16.749643 sshd[4770]: Accepted publickey for core from 10.200.16.10 port 49948 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:10:16.752268 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:10:16.756259 systemd-logind[1742]: New session 12 of user core.
Jan 29 16:10:16.761174 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 16:10:17.125495 sshd[4772]: Connection closed by 10.200.16.10 port 49948
Jan 29 16:10:17.124574 sshd-session[4770]: pam_unix(sshd:session): session closed for user core
Jan 29 16:10:17.127124 systemd[1]: sshd@9-10.200.20.42:22-10.200.16.10:49948.service: Deactivated successfully.
Jan 29 16:10:17.128738 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 16:10:17.130645 systemd-logind[1742]: Session 12 logged out. Waiting for processes to exit.
Jan 29 16:10:17.131499 systemd-logind[1742]: Removed session 12.
Jan 29 16:10:22.205187 systemd[1]: Started sshd@10-10.200.20.42:22-10.200.16.10:49960.service - OpenSSH per-connection server daemon (10.200.16.10:49960).
Jan 29 16:10:22.660881 sshd[4785]: Accepted publickey for core from 10.200.16.10 port 49960 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:10:22.662125 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:10:22.666716 systemd-logind[1742]: New session 13 of user core.
Jan 29 16:10:22.677232 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 16:10:23.042133 sshd[4787]: Connection closed by 10.200.16.10 port 49960
Jan 29 16:10:23.043244 sshd-session[4785]: pam_unix(sshd:session): session closed for user core
Jan 29 16:10:23.045929 systemd[1]: sshd@10-10.200.20.42:22-10.200.16.10:49960.service: Deactivated successfully.
Jan 29 16:10:23.047559 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 16:10:23.048924 systemd-logind[1742]: Session 13 logged out. Waiting for processes to exit.
Jan 29 16:10:23.049963 systemd-logind[1742]: Removed session 13.
Jan 29 16:10:23.129246 systemd[1]: Started sshd@11-10.200.20.42:22-10.200.16.10:49974.service - OpenSSH per-connection server daemon (10.200.16.10:49974).
Jan 29 16:10:23.572741 sshd[4800]: Accepted publickey for core from 10.200.16.10 port 49974 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:23.573947 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:23.579121 systemd-logind[1742]: New session 14 of user core. Jan 29 16:10:23.585182 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:10:23.989159 sshd[4802]: Connection closed by 10.200.16.10 port 49974 Jan 29 16:10:23.989699 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:23.992393 systemd[1]: sshd@11-10.200.20.42:22-10.200.16.10:49974.service: Deactivated successfully. Jan 29 16:10:23.993993 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:10:23.996012 systemd-logind[1742]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:10:23.997267 systemd-logind[1742]: Removed session 14. Jan 29 16:10:24.070264 systemd[1]: Started sshd@12-10.200.20.42:22-10.200.16.10:49990.service - OpenSSH per-connection server daemon (10.200.16.10:49990). Jan 29 16:10:24.505115 sshd[4811]: Accepted publickey for core from 10.200.16.10 port 49990 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:24.506320 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:24.511098 systemd-logind[1742]: New session 15 of user core. Jan 29 16:10:24.516207 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:10:24.879619 sshd[4813]: Connection closed by 10.200.16.10 port 49990 Jan 29 16:10:24.880376 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:24.883216 systemd-logind[1742]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:10:24.883374 systemd[1]: sshd@12-10.200.20.42:22-10.200.16.10:49990.service: Deactivated successfully. 
Jan 29 16:10:24.884924 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:10:24.887203 systemd-logind[1742]: Removed session 15. Jan 29 16:10:29.959274 systemd[1]: Started sshd@13-10.200.20.42:22-10.200.16.10:54284.service - OpenSSH per-connection server daemon (10.200.16.10:54284). Jan 29 16:10:30.376296 sshd[4824]: Accepted publickey for core from 10.200.16.10 port 54284 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:30.377384 sshd-session[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:30.381203 systemd-logind[1742]: New session 16 of user core. Jan 29 16:10:30.385181 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:10:30.740397 sshd[4826]: Connection closed by 10.200.16.10 port 54284 Jan 29 16:10:30.740919 sshd-session[4824]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:30.744063 systemd[1]: sshd@13-10.200.20.42:22-10.200.16.10:54284.service: Deactivated successfully. Jan 29 16:10:30.745645 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:10:30.746413 systemd-logind[1742]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:10:30.747766 systemd-logind[1742]: Removed session 16. Jan 29 16:10:35.816867 systemd[1]: Started sshd@14-10.200.20.42:22-10.200.16.10:50732.service - OpenSSH per-connection server daemon (10.200.16.10:50732). Jan 29 16:10:36.241153 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 50732 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:36.242394 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:36.246815 systemd-logind[1742]: New session 17 of user core. Jan 29 16:10:36.255172 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 16:10:36.605252 sshd[4840]: Connection closed by 10.200.16.10 port 50732 Jan 29 16:10:36.605690 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:36.609111 systemd[1]: sshd@14-10.200.20.42:22-10.200.16.10:50732.service: Deactivated successfully. Jan 29 16:10:36.610799 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:10:36.611661 systemd-logind[1742]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:10:36.612618 systemd-logind[1742]: Removed session 17. Jan 29 16:10:36.686892 systemd[1]: Started sshd@15-10.200.20.42:22-10.200.16.10:50744.service - OpenSSH per-connection server daemon (10.200.16.10:50744). Jan 29 16:10:37.107055 sshd[4852]: Accepted publickey for core from 10.200.16.10 port 50744 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:37.108302 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:37.113267 systemd-logind[1742]: New session 18 of user core. Jan 29 16:10:37.117191 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:10:37.504517 sshd[4854]: Connection closed by 10.200.16.10 port 50744 Jan 29 16:10:37.505168 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:37.508260 systemd[1]: sshd@15-10.200.20.42:22-10.200.16.10:50744.service: Deactivated successfully. Jan 29 16:10:37.510436 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:10:37.511451 systemd-logind[1742]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:10:37.512516 systemd-logind[1742]: Removed session 18. Jan 29 16:10:37.588457 systemd[1]: Started sshd@16-10.200.20.42:22-10.200.16.10:50758.service - OpenSSH per-connection server daemon (10.200.16.10:50758). 
Jan 29 16:10:38.020454 sshd[4863]: Accepted publickey for core from 10.200.16.10 port 50758 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:38.021689 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:38.026852 systemd-logind[1742]: New session 19 of user core. Jan 29 16:10:38.034256 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:10:39.888070 sshd[4865]: Connection closed by 10.200.16.10 port 50758 Jan 29 16:10:39.889041 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:39.891401 systemd[1]: sshd@16-10.200.20.42:22-10.200.16.10:50758.service: Deactivated successfully. Jan 29 16:10:39.893566 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:10:39.895739 systemd-logind[1742]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:10:39.897304 systemd-logind[1742]: Removed session 19. Jan 29 16:10:39.969273 systemd[1]: Started sshd@17-10.200.20.42:22-10.200.16.10:50764.service - OpenSSH per-connection server daemon (10.200.16.10:50764). Jan 29 16:10:40.387000 sshd[4885]: Accepted publickey for core from 10.200.16.10 port 50764 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:40.388296 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:40.393053 systemd-logind[1742]: New session 20 of user core. Jan 29 16:10:40.399188 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:10:40.894537 sshd[4887]: Connection closed by 10.200.16.10 port 50764 Jan 29 16:10:40.895189 sshd-session[4885]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:40.898769 systemd[1]: sshd@17-10.200.20.42:22-10.200.16.10:50764.service: Deactivated successfully. Jan 29 16:10:40.900487 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:10:40.901289 systemd-logind[1742]: Session 20 logged out. 
Waiting for processes to exit. Jan 29 16:10:40.902325 systemd-logind[1742]: Removed session 20. Jan 29 16:10:40.969326 systemd[1]: Started sshd@18-10.200.20.42:22-10.200.16.10:50780.service - OpenSSH per-connection server daemon (10.200.16.10:50780). Jan 29 16:10:41.382677 sshd[4897]: Accepted publickey for core from 10.200.16.10 port 50780 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:41.383894 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:41.388519 systemd-logind[1742]: New session 21 of user core. Jan 29 16:10:41.395161 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 16:10:41.738779 sshd[4899]: Connection closed by 10.200.16.10 port 50780 Jan 29 16:10:41.739324 sshd-session[4897]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:41.742901 systemd[1]: sshd@18-10.200.20.42:22-10.200.16.10:50780.service: Deactivated successfully. Jan 29 16:10:41.745213 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:10:41.746713 systemd-logind[1742]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:10:41.747635 systemd-logind[1742]: Removed session 21. Jan 29 16:10:46.820276 systemd[1]: Started sshd@19-10.200.20.42:22-10.200.16.10:55668.service - OpenSSH per-connection server daemon (10.200.16.10:55668). Jan 29 16:10:47.238196 sshd[4917]: Accepted publickey for core from 10.200.16.10 port 55668 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:47.239430 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:47.243285 systemd-logind[1742]: New session 22 of user core. Jan 29 16:10:47.254168 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 29 16:10:47.608137 sshd[4919]: Connection closed by 10.200.16.10 port 55668 Jan 29 16:10:47.607972 sshd-session[4917]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:47.611341 systemd[1]: sshd@19-10.200.20.42:22-10.200.16.10:55668.service: Deactivated successfully. Jan 29 16:10:47.613015 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:10:47.613851 systemd-logind[1742]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:10:47.614833 systemd-logind[1742]: Removed session 22. Jan 29 16:10:52.683248 systemd[1]: Started sshd@20-10.200.20.42:22-10.200.16.10:55684.service - OpenSSH per-connection server daemon (10.200.16.10:55684). Jan 29 16:10:53.104538 sshd[4931]: Accepted publickey for core from 10.200.16.10 port 55684 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:53.105727 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:53.109683 systemd-logind[1742]: New session 23 of user core. Jan 29 16:10:53.114177 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:10:53.463972 sshd[4933]: Connection closed by 10.200.16.10 port 55684 Jan 29 16:10:53.463538 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:53.466007 systemd[1]: sshd@20-10.200.20.42:22-10.200.16.10:55684.service: Deactivated successfully. Jan 29 16:10:53.467694 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:10:53.469232 systemd-logind[1742]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:10:53.470076 systemd-logind[1742]: Removed session 23. Jan 29 16:10:58.540078 systemd[1]: Started sshd@21-10.200.20.42:22-10.200.16.10:58410.service - OpenSSH per-connection server daemon (10.200.16.10:58410). 
Jan 29 16:10:58.964841 sshd[4945]: Accepted publickey for core from 10.200.16.10 port 58410 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:58.966111 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:58.971476 systemd-logind[1742]: New session 24 of user core. Jan 29 16:10:58.974247 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 16:10:59.323597 sshd[4947]: Connection closed by 10.200.16.10 port 58410 Jan 29 16:10:59.324124 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Jan 29 16:10:59.327576 systemd[1]: sshd@21-10.200.20.42:22-10.200.16.10:58410.service: Deactivated successfully. Jan 29 16:10:59.329886 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:10:59.331085 systemd-logind[1742]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:10:59.331897 systemd-logind[1742]: Removed session 24. Jan 29 16:10:59.407289 systemd[1]: Started sshd@22-10.200.20.42:22-10.200.16.10:58416.service - OpenSSH per-connection server daemon (10.200.16.10:58416). Jan 29 16:10:59.825373 sshd[4959]: Accepted publickey for core from 10.200.16.10 port 58416 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:10:59.826587 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:10:59.830484 systemd-logind[1742]: New session 25 of user core. Jan 29 16:10:59.842170 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 29 16:11:02.168321 kubelet[3340]: I0129 16:11:02.168243 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ntkg8" podStartSLOduration=139.168224852 podStartE2EDuration="2m19.168224852s" podCreationTimestamp="2025-01-29 16:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:09:05.260969385 +0000 UTC m=+26.258711490" watchObservedRunningTime="2025-01-29 16:11:02.168224852 +0000 UTC m=+143.165966957" Jan 29 16:11:02.195518 containerd[1777]: time="2025-01-29T16:11:02.195472751Z" level=info msg="StopContainer for \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\" with timeout 30 (s)" Jan 29 16:11:02.196517 containerd[1777]: time="2025-01-29T16:11:02.196449670Z" level=info msg="Stop container \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\" with signal terminated" Jan 29 16:11:02.202524 containerd[1777]: time="2025-01-29T16:11:02.202470825Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:11:02.207861 systemd[1]: cri-containerd-a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05.scope: Deactivated successfully. 
Jan 29 16:11:02.215057 containerd[1777]: time="2025-01-29T16:11:02.214911695Z" level=info msg="StopContainer for \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\" with timeout 2 (s)" Jan 29 16:11:02.215399 containerd[1777]: time="2025-01-29T16:11:02.215362695Z" level=info msg="Stop container \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\" with signal terminated" Jan 29 16:11:02.221458 systemd-networkd[1485]: lxc_health: Link DOWN Jan 29 16:11:02.223097 systemd-networkd[1485]: lxc_health: Lost carrier Jan 29 16:11:02.232540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05-rootfs.mount: Deactivated successfully. Jan 29 16:11:02.238710 systemd[1]: cri-containerd-ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad.scope: Deactivated successfully. Jan 29 16:11:02.239904 systemd[1]: cri-containerd-ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad.scope: Consumed 6.038s CPU time, 122.5M memory peak, 144K read from disk, 12.9M written to disk. Jan 29 16:11:02.256958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad-rootfs.mount: Deactivated successfully. 
Jan 29 16:11:02.281314 containerd[1777]: time="2025-01-29T16:11:02.281255801Z" level=info msg="shim disconnected" id=a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05 namespace=k8s.io Jan 29 16:11:02.281314 containerd[1777]: time="2025-01-29T16:11:02.281308121Z" level=warning msg="cleaning up after shim disconnected" id=a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05 namespace=k8s.io Jan 29 16:11:02.281314 containerd[1777]: time="2025-01-29T16:11:02.281316121Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:11:02.281830 containerd[1777]: time="2025-01-29T16:11:02.281690601Z" level=info msg="shim disconnected" id=ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad namespace=k8s.io Jan 29 16:11:02.281830 containerd[1777]: time="2025-01-29T16:11:02.281722041Z" level=warning msg="cleaning up after shim disconnected" id=ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad namespace=k8s.io Jan 29 16:11:02.281830 containerd[1777]: time="2025-01-29T16:11:02.281730041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:11:02.300155 containerd[1777]: time="2025-01-29T16:11:02.300118266Z" level=info msg="StopContainer for \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\" returns successfully" Jan 29 16:11:02.300829 containerd[1777]: time="2025-01-29T16:11:02.300808426Z" level=info msg="StopPodSandbox for \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\"" Jan 29 16:11:02.300975 containerd[1777]: time="2025-01-29T16:11:02.300958306Z" level=info msg="Container to stop \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:02.301060 containerd[1777]: time="2025-01-29T16:11:02.301046386Z" level=info msg="Container to stop \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jan 29 16:11:02.301121 containerd[1777]: time="2025-01-29T16:11:02.301108345Z" level=info msg="Container to stop \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:02.301174 containerd[1777]: time="2025-01-29T16:11:02.301162185Z" level=info msg="Container to stop \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:02.301238 containerd[1777]: time="2025-01-29T16:11:02.301224105Z" level=info msg="Container to stop \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:02.301972 containerd[1777]: time="2025-01-29T16:11:02.301100746Z" level=info msg="StopContainer for \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\" returns successfully" Jan 29 16:11:02.302492 containerd[1777]: time="2025-01-29T16:11:02.302465984Z" level=info msg="StopPodSandbox for \"ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db\"" Jan 29 16:11:02.302617 containerd[1777]: time="2025-01-29T16:11:02.302600304Z" level=info msg="Container to stop \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:11:02.303554 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e-shm.mount: Deactivated successfully. Jan 29 16:11:02.310040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db-shm.mount: Deactivated successfully. Jan 29 16:11:02.313140 systemd[1]: cri-containerd-9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e.scope: Deactivated successfully. 
Jan 29 16:11:02.325159 systemd[1]: cri-containerd-ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db.scope: Deactivated successfully. Jan 29 16:11:02.357000 containerd[1777]: time="2025-01-29T16:11:02.356774741Z" level=info msg="shim disconnected" id=9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e namespace=k8s.io Jan 29 16:11:02.358253 containerd[1777]: time="2025-01-29T16:11:02.358110860Z" level=warning msg="cleaning up after shim disconnected" id=9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e namespace=k8s.io Jan 29 16:11:02.358253 containerd[1777]: time="2025-01-29T16:11:02.358135380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:11:02.358253 containerd[1777]: time="2025-01-29T16:11:02.357148540Z" level=info msg="shim disconnected" id=ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db namespace=k8s.io Jan 29 16:11:02.358253 containerd[1777]: time="2025-01-29T16:11:02.358211860Z" level=warning msg="cleaning up after shim disconnected" id=ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db namespace=k8s.io Jan 29 16:11:02.358253 containerd[1777]: time="2025-01-29T16:11:02.358220860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:11:02.369936 containerd[1777]: time="2025-01-29T16:11:02.369667050Z" level=info msg="TearDown network for sandbox \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" successfully" Jan 29 16:11:02.369936 containerd[1777]: time="2025-01-29T16:11:02.369700810Z" level=info msg="StopPodSandbox for \"9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e\" returns successfully" Jan 29 16:11:02.372235 containerd[1777]: time="2025-01-29T16:11:02.372102488Z" level=info msg="TearDown network for sandbox \"ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db\" successfully" Jan 29 16:11:02.372235 containerd[1777]: time="2025-01-29T16:11:02.372130248Z" level=info msg="StopPodSandbox for 
\"ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db\" returns successfully" Jan 29 16:11:02.417679 kubelet[3340]: I0129 16:11:02.417590 3340 scope.go:117] "RemoveContainer" containerID="a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05" Jan 29 16:11:02.419804 containerd[1777]: time="2025-01-29T16:11:02.419703930Z" level=info msg="RemoveContainer for \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\"" Jan 29 16:11:02.435060 containerd[1777]: time="2025-01-29T16:11:02.434016798Z" level=info msg="RemoveContainer for \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\" returns successfully" Jan 29 16:11:02.435641 kubelet[3340]: I0129 16:11:02.435618 3340 scope.go:117] "RemoveContainer" containerID="a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05" Jan 29 16:11:02.436971 containerd[1777]: time="2025-01-29T16:11:02.436928316Z" level=error msg="ContainerStatus for \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\": not found" Jan 29 16:11:02.437206 kubelet[3340]: E0129 16:11:02.437180 3340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\": not found" containerID="a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05" Jan 29 16:11:02.437299 kubelet[3340]: I0129 16:11:02.437211 3340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05"} err="failed to get container status \"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a65567214f4f24ecbf2070b46790e676114683227252404a7f741d00051a4d05\": not found" Jan 29 16:11:02.437299 kubelet[3340]: I0129 16:11:02.437293 3340 scope.go:117] "RemoveContainer" containerID="ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad" Jan 29 16:11:02.438969 containerd[1777]: time="2025-01-29T16:11:02.438948395Z" level=info msg="RemoveContainer for \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\"" Jan 29 16:11:02.446603 containerd[1777]: time="2025-01-29T16:11:02.446577788Z" level=info msg="RemoveContainer for \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\" returns successfully" Jan 29 16:11:02.446857 kubelet[3340]: I0129 16:11:02.446819 3340 scope.go:117] "RemoveContainer" containerID="5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b" Jan 29 16:11:02.447871 containerd[1777]: time="2025-01-29T16:11:02.447850187Z" level=info msg="RemoveContainer for \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\"" Jan 29 16:11:02.455364 containerd[1777]: time="2025-01-29T16:11:02.455338581Z" level=info msg="RemoveContainer for \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\" returns successfully" Jan 29 16:11:02.455627 kubelet[3340]: I0129 16:11:02.455608 3340 scope.go:117] "RemoveContainer" containerID="6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e" Jan 29 16:11:02.456718 containerd[1777]: time="2025-01-29T16:11:02.456690020Z" level=info msg="RemoveContainer for \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\"" Jan 29 16:11:02.465709 containerd[1777]: time="2025-01-29T16:11:02.465677333Z" level=info msg="RemoveContainer for \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\" returns successfully" Jan 29 16:11:02.466275 kubelet[3340]: I0129 16:11:02.465867 3340 scope.go:117] "RemoveContainer" containerID="8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b" Jan 29 16:11:02.466275 kubelet[3340]: 
I0129 16:11:02.465918 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3b7a5eb-5088-44fc-84ed-23624ab11d21-clustermesh-secrets\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466275 kubelet[3340]: I0129 16:11:02.465943 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkn7r\" (UniqueName: \"kubernetes.io/projected/d5930b9f-c877-484f-b461-3587d08ef908-kube-api-access-gkn7r\") pod \"d5930b9f-c877-484f-b461-3587d08ef908\" (UID: \"d5930b9f-c877-484f-b461-3587d08ef908\") " Jan 29 16:11:02.466275 kubelet[3340]: I0129 16:11:02.465966 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-hostproc\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466275 kubelet[3340]: I0129 16:11:02.465980 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cni-path\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466275 kubelet[3340]: I0129 16:11:02.465994 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-run\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466275 kubelet[3340]: I0129 16:11:02.466008 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-xtables-lock\") pod 
\"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466458 kubelet[3340]: I0129 16:11:02.466023 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-lib-modules\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466458 kubelet[3340]: I0129 16:11:02.466057 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3b7a5eb-5088-44fc-84ed-23624ab11d21-hubble-tls\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466458 kubelet[3340]: I0129 16:11:02.466076 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdjbk\" (UniqueName: \"kubernetes.io/projected/e3b7a5eb-5088-44fc-84ed-23624ab11d21-kube-api-access-qdjbk\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466458 kubelet[3340]: I0129 16:11:02.466094 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-config-path\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466458 kubelet[3340]: I0129 16:11:02.466109 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-cgroup\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466458 kubelet[3340]: I0129 16:11:02.466124 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-etc-cni-netd\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466577 kubelet[3340]: I0129 16:11:02.466139 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-host-proc-sys-kernel\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466577 kubelet[3340]: I0129 16:11:02.466154 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-host-proc-sys-net\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466577 kubelet[3340]: I0129 16:11:02.466172 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5930b9f-c877-484f-b461-3587d08ef908-cilium-config-path\") pod \"d5930b9f-c877-484f-b461-3587d08ef908\" (UID: \"d5930b9f-c877-484f-b461-3587d08ef908\") " Jan 29 16:11:02.466577 kubelet[3340]: I0129 16:11:02.466187 3340 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-bpf-maps\") pod \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\" (UID: \"e3b7a5eb-5088-44fc-84ed-23624ab11d21\") " Jan 29 16:11:02.466577 kubelet[3340]: I0129 16:11:02.466226 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.468049 kubelet[3340]: I0129 16:11:02.466878 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-hostproc" (OuterVolumeSpecName: "hostproc") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.468049 kubelet[3340]: I0129 16:11:02.466929 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cni-path" (OuterVolumeSpecName: "cni-path") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.468049 kubelet[3340]: I0129 16:11:02.466947 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.468049 kubelet[3340]: I0129 16:11:02.466964 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.468049 kubelet[3340]: I0129 16:11:02.466977 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.469527 kubelet[3340]: I0129 16:11:02.469503 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.469633 kubelet[3340]: I0129 16:11:02.469620 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.469705 kubelet[3340]: I0129 16:11:02.469691 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.469775 kubelet[3340]: I0129 16:11:02.469763 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:11:02.469824 kubelet[3340]: I0129 16:11:02.469788 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b7a5eb-5088-44fc-84ed-23624ab11d21-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:11:02.469881 kubelet[3340]: I0129 16:11:02.469842 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5930b9f-c877-484f-b461-3587d08ef908-kube-api-access-gkn7r" (OuterVolumeSpecName: "kube-api-access-gkn7r") pod "d5930b9f-c877-484f-b461-3587d08ef908" (UID: "d5930b9f-c877-484f-b461-3587d08ef908"). InnerVolumeSpecName "kube-api-access-gkn7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:02.471660 containerd[1777]: time="2025-01-29T16:11:02.471637528Z" level=info msg="RemoveContainer for \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\"" Jan 29 16:11:02.473466 kubelet[3340]: I0129 16:11:02.473420 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5930b9f-c877-484f-b461-3587d08ef908-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5930b9f-c877-484f-b461-3587d08ef908" (UID: "d5930b9f-c877-484f-b461-3587d08ef908"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:11:02.473921 kubelet[3340]: I0129 16:11:02.473890 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:11:02.473971 kubelet[3340]: I0129 16:11:02.473646 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b7a5eb-5088-44fc-84ed-23624ab11d21-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:02.473971 kubelet[3340]: I0129 16:11:02.473869 3340 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b7a5eb-5088-44fc-84ed-23624ab11d21-kube-api-access-qdjbk" (OuterVolumeSpecName: "kube-api-access-qdjbk") pod "e3b7a5eb-5088-44fc-84ed-23624ab11d21" (UID: "e3b7a5eb-5088-44fc-84ed-23624ab11d21"). InnerVolumeSpecName "kube-api-access-qdjbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:02.480235 containerd[1777]: time="2025-01-29T16:11:02.480181041Z" level=info msg="RemoveContainer for \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\" returns successfully" Jan 29 16:11:02.480594 kubelet[3340]: I0129 16:11:02.480350 3340 scope.go:117] "RemoveContainer" containerID="08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138" Jan 29 16:11:02.481492 containerd[1777]: time="2025-01-29T16:11:02.481462120Z" level=info msg="RemoveContainer for \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\"" Jan 29 16:11:02.490952 containerd[1777]: time="2025-01-29T16:11:02.490923673Z" level=info msg="RemoveContainer for \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\" returns successfully" Jan 29 16:11:02.491173 kubelet[3340]: I0129 16:11:02.491147 3340 scope.go:117] "RemoveContainer" containerID="ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad" Jan 29 16:11:02.491638 containerd[1777]: time="2025-01-29T16:11:02.491391032Z" level=error msg="ContainerStatus for \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\": not found" Jan 29 16:11:02.491718 kubelet[3340]: E0129 16:11:02.491518 3340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\": not found" containerID="ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad" Jan 29 16:11:02.491718 kubelet[3340]: I0129 16:11:02.491560 3340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad"} err="failed to get container status 
\"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef14aabd2d8791105cc2ff53a92a197d1941a36ea80839ff6a1ef8e5bcf0d1ad\": not found" Jan 29 16:11:02.491718 kubelet[3340]: I0129 16:11:02.491580 3340 scope.go:117] "RemoveContainer" containerID="5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b" Jan 29 16:11:02.491796 containerd[1777]: time="2025-01-29T16:11:02.491737432Z" level=error msg="ContainerStatus for \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\": not found" Jan 29 16:11:02.492001 kubelet[3340]: E0129 16:11:02.491893 3340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\": not found" containerID="5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b" Jan 29 16:11:02.492001 kubelet[3340]: I0129 16:11:02.491914 3340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b"} err="failed to get container status \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cd428a0af736485029c5774caf6cb0b46476b4935cdfe3ca0af7d2e8f6e926b\": not found" Jan 29 16:11:02.492001 kubelet[3340]: I0129 16:11:02.491931 3340 scope.go:117] "RemoveContainer" containerID="6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e" Jan 29 16:11:02.492170 containerd[1777]: time="2025-01-29T16:11:02.492123992Z" level=error msg="ContainerStatus for \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\": not found" Jan 29 16:11:02.492277 kubelet[3340]: E0129 16:11:02.492247 3340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\": not found" containerID="6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e" Jan 29 16:11:02.492310 kubelet[3340]: I0129 16:11:02.492284 3340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e"} err="failed to get container status \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"6da7199f3a35ff28cce863a4850bcc4c4d85cd2f72169ffcbf537a0f542a3c9e\": not found" Jan 29 16:11:02.492310 kubelet[3340]: I0129 16:11:02.492300 3340 scope.go:117] "RemoveContainer" containerID="8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b" Jan 29 16:11:02.492581 containerd[1777]: time="2025-01-29T16:11:02.492554031Z" level=error msg="ContainerStatus for \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\": not found" Jan 29 16:11:02.492763 kubelet[3340]: E0129 16:11:02.492740 3340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\": not found" containerID="8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b" Jan 29 16:11:02.492802 kubelet[3340]: I0129 16:11:02.492780 3340 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b"} err="failed to get container status \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8adf8d67797d830924e3c967899d732b8e0f9b4fc5f0cde10d93700cc3f5467b\": not found" Jan 29 16:11:02.492802 kubelet[3340]: I0129 16:11:02.492795 3340 scope.go:117] "RemoveContainer" containerID="08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138" Jan 29 16:11:02.492992 containerd[1777]: time="2025-01-29T16:11:02.492963071Z" level=error msg="ContainerStatus for \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\": not found" Jan 29 16:11:02.493117 kubelet[3340]: E0129 16:11:02.493095 3340 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\": not found" containerID="08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138" Jan 29 16:11:02.493156 kubelet[3340]: I0129 16:11:02.493119 3340 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138"} err="failed to get container status \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\": rpc error: code = NotFound desc = an error occurred when try to find container \"08252a45b66c44998ad04075c215758ca89795cb145c9dc6532eca54e9882138\": not found" Jan 29 16:11:02.566826 kubelet[3340]: I0129 16:11:02.566794 3340 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gkn7r\" (UniqueName: 
\"kubernetes.io/projected/d5930b9f-c877-484f-b461-3587d08ef908-kube-api-access-gkn7r\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.566904 kubelet[3340]: I0129 16:11:02.566830 3340 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-hostproc\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.566904 kubelet[3340]: I0129 16:11:02.566841 3340 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cni-path\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.566904 kubelet[3340]: I0129 16:11:02.566849 3340 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-run\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.566904 kubelet[3340]: I0129 16:11:02.566857 3340 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-xtables-lock\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.566904 kubelet[3340]: I0129 16:11:02.566864 3340 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3b7a5eb-5088-44fc-84ed-23624ab11d21-hubble-tls\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.566904 kubelet[3340]: I0129 16:11:02.566872 3340 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qdjbk\" (UniqueName: \"kubernetes.io/projected/e3b7a5eb-5088-44fc-84ed-23624ab11d21-kube-api-access-qdjbk\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.566904 kubelet[3340]: I0129 16:11:02.566880 3340 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-config-path\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.566904 kubelet[3340]: I0129 16:11:02.566888 3340 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-lib-modules\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.567105 kubelet[3340]: I0129 16:11:02.566902 3340 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-cilium-cgroup\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.567105 kubelet[3340]: I0129 16:11:02.566910 3340 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-etc-cni-netd\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.567105 kubelet[3340]: I0129 16:11:02.566921 3340 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-host-proc-sys-kernel\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.567105 kubelet[3340]: I0129 16:11:02.566930 3340 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-host-proc-sys-net\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.567105 kubelet[3340]: I0129 16:11:02.566939 3340 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5930b9f-c877-484f-b461-3587d08ef908-cilium-config-path\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.567105 kubelet[3340]: I0129 16:11:02.566947 3340 reconciler_common.go:288] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3b7a5eb-5088-44fc-84ed-23624ab11d21-bpf-maps\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.567105 kubelet[3340]: I0129 16:11:02.566956 3340 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3b7a5eb-5088-44fc-84ed-23624ab11d21-clustermesh-secrets\") on node \"ci-4230.0.0-a-877fd59aac\" DevicePath \"\"" Jan 29 16:11:02.721583 systemd[1]: Removed slice kubepods-besteffort-podd5930b9f_c877_484f_b461_3587d08ef908.slice - libcontainer container kubepods-besteffort-podd5930b9f_c877_484f_b461_3587d08ef908.slice. Jan 29 16:11:02.728618 systemd[1]: Removed slice kubepods-burstable-pode3b7a5eb_5088_44fc_84ed_23624ab11d21.slice - libcontainer container kubepods-burstable-pode3b7a5eb_5088_44fc_84ed_23624ab11d21.slice. Jan 29 16:11:02.728724 systemd[1]: kubepods-burstable-pode3b7a5eb_5088_44fc_84ed_23624ab11d21.slice: Consumed 6.103s CPU time, 122.9M memory peak, 144K read from disk, 12.9M written to disk. Jan 29 16:11:03.094255 kubelet[3340]: I0129 16:11:03.093351 3340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5930b9f-c877-484f-b461-3587d08ef908" path="/var/lib/kubelet/pods/d5930b9f-c877-484f-b461-3587d08ef908/volumes" Jan 29 16:11:03.094255 kubelet[3340]: I0129 16:11:03.093714 3340 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b7a5eb-5088-44fc-84ed-23624ab11d21" path="/var/lib/kubelet/pods/e3b7a5eb-5088-44fc-84ed-23624ab11d21/volumes" Jan 29 16:11:03.186527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee24a853c3bd4c6f7b6595eca0888dbcc108c1ff25d435238dfd3bd432e1a5db-rootfs.mount: Deactivated successfully. Jan 29 16:11:03.186613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9924b5ba96d459f98b0ace5a9c0f70bf20b24b95fbf8bc2bf8d1aba40d7e3a4e-rootfs.mount: Deactivated successfully. 
Jan 29 16:11:03.186669 systemd[1]: var-lib-kubelet-pods-d5930b9f\x2dc877\x2d484f\x2db461\x2d3587d08ef908-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgkn7r.mount: Deactivated successfully. Jan 29 16:11:03.186721 systemd[1]: var-lib-kubelet-pods-e3b7a5eb\x2d5088\x2d44fc\x2d84ed\x2d23624ab11d21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqdjbk.mount: Deactivated successfully. Jan 29 16:11:03.186770 systemd[1]: var-lib-kubelet-pods-e3b7a5eb\x2d5088\x2d44fc\x2d84ed\x2d23624ab11d21-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:11:03.186819 systemd[1]: var-lib-kubelet-pods-e3b7a5eb\x2d5088\x2d44fc\x2d84ed\x2d23624ab11d21-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:11:04.198063 sshd[4961]: Connection closed by 10.200.16.10 port 58416 Jan 29 16:11:04.198832 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:04.199199 kubelet[3340]: E0129 16:11:04.198900 3340 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:11:04.201737 systemd[1]: sshd@22-10.200.20.42:22-10.200.16.10:58416.service: Deactivated successfully. Jan 29 16:11:04.203441 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:11:04.203695 systemd[1]: session-25.scope: Consumed 1.493s CPU time, 23.7M memory peak. Jan 29 16:11:04.204778 systemd-logind[1742]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:11:04.205742 systemd-logind[1742]: Removed session 25. Jan 29 16:11:04.273607 systemd[1]: Started sshd@23-10.200.20.42:22-10.200.16.10:58426.service - OpenSSH per-connection server daemon (10.200.16.10:58426). 
Jan 29 16:11:04.690480 sshd[5121]: Accepted publickey for core from 10.200.16.10 port 58426 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc Jan 29 16:11:04.691733 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:11:04.696533 systemd-logind[1742]: New session 26 of user core. Jan 29 16:11:04.706164 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 16:11:05.833913 kubelet[3340]: E0129 16:11:05.832680 3340 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3b7a5eb-5088-44fc-84ed-23624ab11d21" containerName="mount-bpf-fs" Jan 29 16:11:05.833913 kubelet[3340]: E0129 16:11:05.832712 3340 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5930b9f-c877-484f-b461-3587d08ef908" containerName="cilium-operator" Jan 29 16:11:05.833913 kubelet[3340]: E0129 16:11:05.832719 3340 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3b7a5eb-5088-44fc-84ed-23624ab11d21" containerName="apply-sysctl-overwrites" Jan 29 16:11:05.833913 kubelet[3340]: E0129 16:11:05.832725 3340 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3b7a5eb-5088-44fc-84ed-23624ab11d21" containerName="clean-cilium-state" Jan 29 16:11:05.833913 kubelet[3340]: E0129 16:11:05.832731 3340 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3b7a5eb-5088-44fc-84ed-23624ab11d21" containerName="cilium-agent" Jan 29 16:11:05.833913 kubelet[3340]: E0129 16:11:05.832737 3340 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3b7a5eb-5088-44fc-84ed-23624ab11d21" containerName="mount-cgroup" Jan 29 16:11:05.833913 kubelet[3340]: I0129 16:11:05.832761 3340 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5930b9f-c877-484f-b461-3587d08ef908" containerName="cilium-operator" Jan 29 16:11:05.833913 kubelet[3340]: I0129 16:11:05.832767 3340 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e3b7a5eb-5088-44fc-84ed-23624ab11d21" containerName="cilium-agent" Jan 29 16:11:05.840631 systemd[1]: Created slice kubepods-burstable-podd48bee6a_58e2_4195_89d2_46992c673bfb.slice - libcontainer container kubepods-burstable-podd48bee6a_58e2_4195_89d2_46992c673bfb.slice. Jan 29 16:11:05.854675 sshd[5123]: Connection closed by 10.200.16.10 port 58426 Jan 29 16:11:05.857934 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Jan 29 16:11:05.861211 systemd[1]: sshd@23-10.200.20.42:22-10.200.16.10:58426.service: Deactivated successfully. Jan 29 16:11:05.868777 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 16:11:05.870770 systemd-logind[1742]: Session 26 logged out. Waiting for processes to exit. Jan 29 16:11:05.872458 systemd-logind[1742]: Removed session 26. Jan 29 16:11:05.935274 systemd[1]: Started sshd@24-10.200.20.42:22-10.200.16.10:38002.service - OpenSSH per-connection server daemon (10.200.16.10:38002). Jan 29 16:11:05.984786 kubelet[3340]: I0129 16:11:05.984751 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-cilium-run\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985014 kubelet[3340]: I0129 16:11:05.984989 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-host-proc-sys-kernel\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985110 kubelet[3340]: I0129 16:11:05.985097 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-cilium-cgroup\") pod 
\"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985201 kubelet[3340]: I0129 16:11:05.985189 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-xtables-lock\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985280 kubelet[3340]: I0129 16:11:05.985269 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-host-proc-sys-net\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985409 kubelet[3340]: I0129 16:11:05.985363 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-hostproc\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985409 kubelet[3340]: I0129 16:11:05.985396 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d48bee6a-58e2-4195-89d2-46992c673bfb-hubble-tls\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985481 kubelet[3340]: I0129 16:11:05.985421 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-cni-path\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985481 kubelet[3340]: I0129 
16:11:05.985436 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-bpf-maps\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985481 kubelet[3340]: I0129 16:11:05.985451 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-etc-cni-netd\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985481 kubelet[3340]: I0129 16:11:05.985469 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d48bee6a-58e2-4195-89d2-46992c673bfb-cilium-config-path\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985566 kubelet[3340]: I0129 16:11:05.985486 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d48bee6a-58e2-4195-89d2-46992c673bfb-cilium-ipsec-secrets\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985566 kubelet[3340]: I0129 16:11:05.985502 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g596k\" (UniqueName: \"kubernetes.io/projected/d48bee6a-58e2-4195-89d2-46992c673bfb-kube-api-access-g596k\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985566 kubelet[3340]: I0129 16:11:05.985521 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d48bee6a-58e2-4195-89d2-46992c673bfb-lib-modules\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:05.985566 kubelet[3340]: I0129 16:11:05.985536 3340 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d48bee6a-58e2-4195-89d2-46992c673bfb-clustermesh-secrets\") pod \"cilium-tbst4\" (UID: \"d48bee6a-58e2-4195-89d2-46992c673bfb\") " pod="kube-system/cilium-tbst4" Jan 29 16:11:06.145837 containerd[1777]: time="2025-01-29T16:11:06.145186772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbst4,Uid:d48bee6a-58e2-4195-89d2-46992c673bfb,Namespace:kube-system,Attempt:0,}" Jan 29 16:11:06.183489 containerd[1777]: time="2025-01-29T16:11:06.183314019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:11:06.183489 containerd[1777]: time="2025-01-29T16:11:06.183384259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:11:06.183489 containerd[1777]: time="2025-01-29T16:11:06.183398939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:11:06.184188 containerd[1777]: time="2025-01-29T16:11:06.184110218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:11:06.199195 systemd[1]: Started cri-containerd-79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d.scope - libcontainer container 79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d. 
Jan 29 16:11:06.220703 containerd[1777]: time="2025-01-29T16:11:06.220660587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbst4,Uid:d48bee6a-58e2-4195-89d2-46992c673bfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\"" Jan 29 16:11:06.224400 containerd[1777]: time="2025-01-29T16:11:06.224257384Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:11:06.255672 containerd[1777]: time="2025-01-29T16:11:06.255632836Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"664e9c3b760616035d5a1a8258ab0469078cd0737fad40fe46aa818037d77172\"" Jan 29 16:11:06.256539 containerd[1777]: time="2025-01-29T16:11:06.256495596Z" level=info msg="StartContainer for \"664e9c3b760616035d5a1a8258ab0469078cd0737fad40fe46aa818037d77172\"" Jan 29 16:11:06.281185 systemd[1]: Started cri-containerd-664e9c3b760616035d5a1a8258ab0469078cd0737fad40fe46aa818037d77172.scope - libcontainer container 664e9c3b760616035d5a1a8258ab0469078cd0737fad40fe46aa818037d77172. Jan 29 16:11:06.307151 containerd[1777]: time="2025-01-29T16:11:06.307106192Z" level=info msg="StartContainer for \"664e9c3b760616035d5a1a8258ab0469078cd0737fad40fe46aa818037d77172\" returns successfully" Jan 29 16:11:06.311203 systemd[1]: cri-containerd-664e9c3b760616035d5a1a8258ab0469078cd0737fad40fe46aa818037d77172.scope: Deactivated successfully. 
Jan 29 16:11:06.359657 sshd[5133]: Accepted publickey for core from 10.200.16.10 port 38002 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:11:06.362068 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:11:06.366085 systemd-logind[1742]: New session 27 of user core.
Jan 29 16:11:06.373161 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 16:11:06.397429 containerd[1777]: time="2025-01-29T16:11:06.397310074Z" level=info msg="shim disconnected" id=664e9c3b760616035d5a1a8258ab0469078cd0737fad40fe46aa818037d77172 namespace=k8s.io
Jan 29 16:11:06.397986 containerd[1777]: time="2025-01-29T16:11:06.397561874Z" level=warning msg="cleaning up after shim disconnected" id=664e9c3b760616035d5a1a8258ab0469078cd0737fad40fe46aa818037d77172 namespace=k8s.io
Jan 29 16:11:06.397986 containerd[1777]: time="2025-01-29T16:11:06.397577834Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:11:06.435414 containerd[1777]: time="2025-01-29T16:11:06.435118841Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:11:06.464643 containerd[1777]: time="2025-01-29T16:11:06.464564056Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a26af5618129c961c4a18434b298a59932691f80d61210e5873c3b75ea30cfbd\""
Jan 29 16:11:06.465889 containerd[1777]: time="2025-01-29T16:11:06.465128335Z" level=info msg="StartContainer for \"a26af5618129c961c4a18434b298a59932691f80d61210e5873c3b75ea30cfbd\""
Jan 29 16:11:06.488234 systemd[1]: Started cri-containerd-a26af5618129c961c4a18434b298a59932691f80d61210e5873c3b75ea30cfbd.scope - libcontainer container a26af5618129c961c4a18434b298a59932691f80d61210e5873c3b75ea30cfbd.
Jan 29 16:11:06.513710 containerd[1777]: time="2025-01-29T16:11:06.513663693Z" level=info msg="StartContainer for \"a26af5618129c961c4a18434b298a59932691f80d61210e5873c3b75ea30cfbd\" returns successfully"
Jan 29 16:11:06.517891 systemd[1]: cri-containerd-a26af5618129c961c4a18434b298a59932691f80d61210e5873c3b75ea30cfbd.scope: Deactivated successfully.
Jan 29 16:11:06.548834 containerd[1777]: time="2025-01-29T16:11:06.548736023Z" level=info msg="shim disconnected" id=a26af5618129c961c4a18434b298a59932691f80d61210e5873c3b75ea30cfbd namespace=k8s.io
Jan 29 16:11:06.548834 containerd[1777]: time="2025-01-29T16:11:06.548789903Z" level=warning msg="cleaning up after shim disconnected" id=a26af5618129c961c4a18434b298a59932691f80d61210e5873c3b75ea30cfbd namespace=k8s.io
Jan 29 16:11:06.548834 containerd[1777]: time="2025-01-29T16:11:06.548797863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:11:06.658367 sshd[5229]: Connection closed by 10.200.16.10 port 38002
Jan 29 16:11:06.658858 sshd-session[5133]: pam_unix(sshd:session): session closed for user core
Jan 29 16:11:06.661370 systemd-logind[1742]: Session 27 logged out. Waiting for processes to exit.
Jan 29 16:11:06.661526 systemd[1]: sshd@24-10.200.20.42:22-10.200.16.10:38002.service: Deactivated successfully.
Jan 29 16:11:06.663298 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 16:11:06.664815 systemd-logind[1742]: Removed session 27.
Jan 29 16:11:06.732941 systemd[1]: Started sshd@25-10.200.20.42:22-10.200.16.10:38016.service - OpenSSH per-connection server daemon (10.200.16.10:38016).
Jan 29 16:11:07.151490 sshd[5310]: Accepted publickey for core from 10.200.16.10 port 38016 ssh2: RSA SHA256:uduChEH/v8L012SDeQTIcths1H40qP4f6MLqXvZV0Vc
Jan 29 16:11:07.152780 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:11:07.157990 systemd-logind[1742]: New session 28 of user core.
Jan 29 16:11:07.168178 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 16:11:07.443055 containerd[1777]: time="2025-01-29T16:11:07.442141970Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:11:07.484348 containerd[1777]: time="2025-01-29T16:11:07.484304053Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d\""
Jan 29 16:11:07.485149 containerd[1777]: time="2025-01-29T16:11:07.485110453Z" level=info msg="StartContainer for \"17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d\""
Jan 29 16:11:07.515184 systemd[1]: Started cri-containerd-17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d.scope - libcontainer container 17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d.
Jan 29 16:11:07.539357 systemd[1]: cri-containerd-17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d.scope: Deactivated successfully.
Jan 29 16:11:07.542119 containerd[1777]: time="2025-01-29T16:11:07.541374044Z" level=info msg="StartContainer for \"17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d\" returns successfully"
Jan 29 16:11:07.575552 containerd[1777]: time="2025-01-29T16:11:07.575493415Z" level=info msg="shim disconnected" id=17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d namespace=k8s.io
Jan 29 16:11:07.575552 containerd[1777]: time="2025-01-29T16:11:07.575547055Z" level=warning msg="cleaning up after shim disconnected" id=17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d namespace=k8s.io
Jan 29 16:11:07.575552 containerd[1777]: time="2025-01-29T16:11:07.575556095Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:11:08.092371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17ff6f67a6e9e78b0aef4b100dcc4a830f97ef437f897ce991eb9f48e0ab983d-rootfs.mount: Deactivated successfully.
Jan 29 16:11:08.446361 containerd[1777]: time="2025-01-29T16:11:08.446253821Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:11:08.482824 containerd[1777]: time="2025-01-29T16:11:08.482737350Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476\""
Jan 29 16:11:08.484061 containerd[1777]: time="2025-01-29T16:11:08.483926829Z" level=info msg="StartContainer for \"6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476\""
Jan 29 16:11:08.511164 systemd[1]: Started cri-containerd-6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476.scope - libcontainer container 6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476.
Jan 29 16:11:08.531566 systemd[1]: cri-containerd-6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476.scope: Deactivated successfully.
Jan 29 16:11:08.537886 containerd[1777]: time="2025-01-29T16:11:08.537814262Z" level=info msg="StartContainer for \"6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476\" returns successfully"
Jan 29 16:11:08.554210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476-rootfs.mount: Deactivated successfully.
Jan 29 16:11:08.563385 containerd[1777]: time="2025-01-29T16:11:08.563327080Z" level=info msg="shim disconnected" id=6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476 namespace=k8s.io
Jan 29 16:11:08.563385 containerd[1777]: time="2025-01-29T16:11:08.563385360Z" level=warning msg="cleaning up after shim disconnected" id=6986f6cfad1669996c764355b1cf3669c3e76b7b2321acc3cccdeaf37d53d476 namespace=k8s.io
Jan 29 16:11:08.563563 containerd[1777]: time="2025-01-29T16:11:08.563393560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:11:09.200544 kubelet[3340]: E0129 16:11:09.200507 3340 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:11:09.452730 containerd[1777]: time="2025-01-29T16:11:09.452157031Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:11:09.491289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915770148.mount: Deactivated successfully.
Jan 29 16:11:09.503432 containerd[1777]: time="2025-01-29T16:11:09.503379627Z" level=info msg="CreateContainer within sandbox \"79b045770b3450d7847a634b76995b5a6efd4d72af87bf818ea1470ca5376d9d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e3efd196e4edc453335dceb2436c48e23fd09547f5b81aaa27366d4620fab06\""
Jan 29 16:11:09.504221 containerd[1777]: time="2025-01-29T16:11:09.504104466Z" level=info msg="StartContainer for \"6e3efd196e4edc453335dceb2436c48e23fd09547f5b81aaa27366d4620fab06\""
Jan 29 16:11:09.531224 systemd[1]: Started cri-containerd-6e3efd196e4edc453335dceb2436c48e23fd09547f5b81aaa27366d4620fab06.scope - libcontainer container 6e3efd196e4edc453335dceb2436c48e23fd09547f5b81aaa27366d4620fab06.
Jan 29 16:11:09.560197 containerd[1777]: time="2025-01-29T16:11:09.560127458Z" level=info msg="StartContainer for \"6e3efd196e4edc453335dceb2436c48e23fd09547f5b81aaa27366d4620fab06\" returns successfully"
Jan 29 16:11:09.988129 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 16:11:10.474883 kubelet[3340]: I0129 16:11:10.474379 3340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tbst4" podStartSLOduration=5.474360747 podStartE2EDuration="5.474360747s" podCreationTimestamp="2025-01-29 16:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:11:10.474158947 +0000 UTC m=+151.471901012" watchObservedRunningTime="2025-01-29 16:11:10.474360747 +0000 UTC m=+151.472102852"
Jan 29 16:11:11.587682 systemd[1]: run-containerd-runc-k8s.io-6e3efd196e4edc453335dceb2436c48e23fd09547f5b81aaa27366d4620fab06-runc.ovoWVI.mount: Deactivated successfully.
Jan 29 16:11:12.693388 kubelet[3340]: I0129 16:11:12.693278 3340 setters.go:600] "Node became not ready" node="ci-4230.0.0-a-877fd59aac" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:11:12Z","lastTransitionTime":"2025-01-29T16:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 16:11:12.739776 systemd-networkd[1485]: lxc_health: Link UP
Jan 29 16:11:12.753782 systemd-networkd[1485]: lxc_health: Gained carrier
Jan 29 16:11:13.772718 systemd[1]: run-containerd-runc-k8s.io-6e3efd196e4edc453335dceb2436c48e23fd09547f5b81aaa27366d4620fab06-runc.iPZRMC.mount: Deactivated successfully.
Jan 29 16:11:14.296173 systemd-networkd[1485]: lxc_health: Gained IPv6LL
Jan 29 16:11:20.174250 systemd[1]: run-containerd-runc-k8s.io-6e3efd196e4edc453335dceb2436c48e23fd09547f5b81aaa27366d4620fab06-runc.EfLde8.mount: Deactivated successfully.
Jan 29 16:11:20.324887 sshd[5312]: Connection closed by 10.200.16.10 port 38016
Jan 29 16:11:20.326019 sshd-session[5310]: pam_unix(sshd:session): session closed for user core
Jan 29 16:11:20.329361 systemd[1]: sshd@25-10.200.20.42:22-10.200.16.10:38016.service: Deactivated successfully.
Jan 29 16:11:20.330922 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 16:11:20.332235 systemd-logind[1742]: Session 28 logged out. Waiting for processes to exit.
Jan 29 16:11:20.333495 systemd-logind[1742]: Removed session 28.