Feb 13 18:55:32.470497 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 18:55:32.470520 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025
Feb 13 18:55:32.470533 kernel: KASLR enabled
Feb 13 18:55:32.470539 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 18:55:32.470546 kernel: printk: bootconsole [pl11] enabled
Feb 13 18:55:32.470552 kernel: efi: EFI v2.7 by EDK II
Feb 13 18:55:32.470559 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 
Feb 13 18:55:32.470565 kernel: random: crng init done
Feb 13 18:55:32.470571 kernel: secureboot: Secure boot disabled
Feb 13 18:55:32.470577 kernel: ACPI: Early table checksum verification disabled
Feb 13 18:55:32.470583 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 18:55:32.470589 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470595 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470602 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01   00000001 INTL 20230628)
Feb 13 18:55:32.470609 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470618 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470668 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470683 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470690 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470696 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470717 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 18:55:32.470724 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470730 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 18:55:32.470737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 18:55:32.470744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 18:55:32.470750 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 18:55:32.470757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 18:55:32.470763 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 18:55:32.472816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 18:55:32.472839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 18:55:32.472846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 18:55:32.472853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 18:55:32.472859 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 18:55:32.472865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 18:55:32.472871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 18:55:32.472878 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Feb 13 18:55:32.472884 kernel: Zone ranges:
Feb 13 18:55:32.472890 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 18:55:32.472896 kernel:   DMA32    empty
Feb 13 18:55:32.472903 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 18:55:32.472918 kernel: Movable zone start for each node
Feb 13 18:55:32.472925 kernel: Early memory node ranges
Feb 13 18:55:32.472932 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 18:55:32.472938 kernel:   node   0: [mem 0x0000000000824000-0x000000003e45ffff]
Feb 13 18:55:32.472945 kernel:   node   0: [mem 0x000000003e460000-0x000000003e46ffff]
Feb 13 18:55:32.472953 kernel:   node   0: [mem 0x000000003e470000-0x000000003e54ffff]
Feb 13 18:55:32.472960 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 18:55:32.472966 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 18:55:32.472973 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 18:55:32.472980 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 18:55:32.472986 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 18:55:32.472994 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 18:55:32.473000 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 18:55:32.473007 kernel: psci: probing for conduit method from ACPI.
Feb 13 18:55:32.473013 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 18:55:32.473020 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 18:55:32.473027 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 18:55:32.473035 kernel: psci: SMC Calling Convention v1.4
Feb 13 18:55:32.473042 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 18:55:32.473048 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 18:55:32.473055 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 18:55:32.473061 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 18:55:32.473068 kernel: pcpu-alloc: [0] 0 [0] 1 
Feb 13 18:55:32.473075 kernel: Detected PIPT I-cache on CPU0
Feb 13 18:55:32.473082 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 18:55:32.473088 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 18:55:32.473095 kernel: CPU features: detected: Spectre-BHB
Feb 13 18:55:32.473101 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 18:55:32.473110 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 18:55:32.473116 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 18:55:32.473123 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 18:55:32.473129 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 18:55:32.473136 kernel: alternatives: applying boot alternatives
Feb 13 18:55:32.473144 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:55:32.473151 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 18:55:32.473158 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 18:55:32.473164 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 18:55:32.473171 kernel: Fallback order for Node 0: 0 
Feb 13 18:55:32.473177 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1032156
Feb 13 18:55:32.473185 kernel: Policy zone: Normal
Feb 13 18:55:32.473192 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 18:55:32.473198 kernel: software IO TLB: area num 2.
Feb 13 18:55:32.473205 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Feb 13 18:55:32.473212 kernel: Memory: 3982052K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212108K reserved, 0K cma-reserved)
Feb 13 18:55:32.473219 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 18:55:32.473225 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 18:55:32.473232 kernel: rcu:         RCU event tracing is enabled.
Feb 13 18:55:32.473239 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 18:55:32.473246 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 18:55:32.473252 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 18:55:32.473261 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 18:55:32.473268 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 18:55:32.473274 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 18:55:32.473280 kernel: GICv3: 960 SPIs implemented
Feb 13 18:55:32.473287 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 18:55:32.473293 kernel: Root IRQ handler: gic_handle_irq
Feb 13 18:55:32.473300 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 18:55:32.473307 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 18:55:32.473313 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 18:55:32.473320 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 18:55:32.473326 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:55:32.473333 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 18:55:32.473342 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 18:55:32.473348 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 18:55:32.473355 kernel: Console: colour dummy device 80x25
Feb 13 18:55:32.473362 kernel: printk: console [tty1] enabled
Feb 13 18:55:32.473369 kernel: ACPI: Core revision 20230628
Feb 13 18:55:32.473376 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 18:55:32.473383 kernel: pid_max: default: 32768 minimum: 301
Feb 13 18:55:32.473390 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 18:55:32.473396 kernel: landlock: Up and running.
Feb 13 18:55:32.473404 kernel: SELinux:  Initializing.
Feb 13 18:55:32.473411 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:55:32.473418 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:55:32.473425 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 18:55:32.473432 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 18:55:32.473439 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 18:55:32.473446 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 18:55:32.473460 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 18:55:32.473467 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 18:55:32.473474 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 18:55:32.473481 kernel: Remapping and enabling EFI services.
Feb 13 18:55:32.473488 kernel: smp: Bringing up secondary CPUs ...
Feb 13 18:55:32.473497 kernel: Detected PIPT I-cache on CPU1
Feb 13 18:55:32.473504 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 18:55:32.473512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:55:32.473519 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 18:55:32.473526 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 18:55:32.473535 kernel: SMP: Total of 2 processors activated.
Feb 13 18:55:32.473542 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 18:55:32.473549 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 18:55:32.473556 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 18:55:32.473563 kernel: CPU features: detected: CRC32 instructions
Feb 13 18:55:32.473570 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 18:55:32.473578 kernel: CPU features: detected: LSE atomic instructions
Feb 13 18:55:32.473585 kernel: CPU features: detected: Privileged Access Never
Feb 13 18:55:32.473592 kernel: CPU: All CPU(s) started at EL1
Feb 13 18:55:32.473601 kernel: alternatives: applying system-wide alternatives
Feb 13 18:55:32.473608 kernel: devtmpfs: initialized
Feb 13 18:55:32.473615 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 18:55:32.473622 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 18:55:32.473630 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 18:55:32.473637 kernel: SMBIOS 3.1.0 present.
Feb 13 18:55:32.473644 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 18:55:32.473651 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 18:55:32.473658 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 18:55:32.473667 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 18:55:32.473674 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 18:55:32.473682 kernel: audit: initializing netlink subsys (disabled)
Feb 13 18:55:32.473689 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 18:55:32.473696 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 18:55:32.473703 kernel: cpuidle: using governor menu
Feb 13 18:55:32.473710 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 18:55:32.473717 kernel: ASID allocator initialised with 32768 entries
Feb 13 18:55:32.473724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 18:55:32.473733 kernel: Serial: AMBA PL011 UART driver
Feb 13 18:55:32.473740 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 18:55:32.473747 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 18:55:32.473754 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 18:55:32.473761 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 18:55:32.473768 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 18:55:32.473786 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 18:55:32.473794 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 18:55:32.473801 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 18:55:32.473811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 18:55:32.473818 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 18:55:32.473825 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 18:55:32.473832 kernel: ACPI: Added _OSI(Module Device)
Feb 13 18:55:32.473840 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 18:55:32.473847 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 18:55:32.473854 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 18:55:32.473861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 18:55:32.473868 kernel: ACPI: Interpreter enabled
Feb 13 18:55:32.473877 kernel: ACPI: Using GIC for interrupt routing
Feb 13 18:55:32.473884 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 18:55:32.473892 kernel: printk: console [ttyAMA0] enabled
Feb 13 18:55:32.473899 kernel: printk: bootconsole [pl11] disabled
Feb 13 18:55:32.473906 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 18:55:32.473913 kernel: iommu: Default domain type: Translated
Feb 13 18:55:32.473920 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 18:55:32.473927 kernel: efivars: Registered efivars operations
Feb 13 18:55:32.473935 kernel: vgaarb: loaded
Feb 13 18:55:32.473943 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 18:55:32.473951 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 18:55:32.473958 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 18:55:32.473965 kernel: pnp: PnP ACPI init
Feb 13 18:55:32.473972 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 18:55:32.473979 kernel: NET: Registered PF_INET protocol family
Feb 13 18:55:32.473987 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 18:55:32.473994 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 18:55:32.474001 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 18:55:32.474010 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 18:55:32.474017 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 18:55:32.474025 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 18:55:32.474032 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:55:32.474039 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:55:32.474046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 18:55:32.474054 kernel: PCI: CLS 0 bytes, default 64
Feb 13 18:55:32.474061 kernel: kvm [1]: HYP mode not available
Feb 13 18:55:32.474068 kernel: Initialise system trusted keyrings
Feb 13 18:55:32.474076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 18:55:32.474084 kernel: Key type asymmetric registered
Feb 13 18:55:32.474091 kernel: Asymmetric key parser 'x509' registered
Feb 13 18:55:32.474098 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 18:55:32.474105 kernel: io scheduler mq-deadline registered
Feb 13 18:55:32.474112 kernel: io scheduler kyber registered
Feb 13 18:55:32.474119 kernel: io scheduler bfq registered
Feb 13 18:55:32.474126 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 18:55:32.474134 kernel: thunder_xcv, ver 1.0
Feb 13 18:55:32.474142 kernel: thunder_bgx, ver 1.0
Feb 13 18:55:32.474149 kernel: nicpf, ver 1.0
Feb 13 18:55:32.474156 kernel: nicvf, ver 1.0
Feb 13 18:55:32.474327 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 18:55:32.474405 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:55:31 UTC (1739472931)
Feb 13 18:55:32.474416 kernel: efifb: probing for efifb
Feb 13 18:55:32.474424 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 18:55:32.474431 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 18:55:32.474441 kernel: efifb: scrolling: redraw
Feb 13 18:55:32.474448 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 18:55:32.474456 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 18:55:32.474463 kernel: fb0: EFI VGA frame buffer device
Feb 13 18:55:32.474470 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 18:55:32.474477 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 18:55:32.474484 kernel: No ACPI PMU IRQ for CPU0
Feb 13 18:55:32.474491 kernel: No ACPI PMU IRQ for CPU1
Feb 13 18:55:32.474498 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 18:55:32.474507 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 18:55:32.474514 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 18:55:32.474522 kernel: NET: Registered PF_INET6 protocol family
Feb 13 18:55:32.474529 kernel: Segment Routing with IPv6
Feb 13 18:55:32.474536 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 18:55:32.474543 kernel: NET: Registered PF_PACKET protocol family
Feb 13 18:55:32.474550 kernel: Key type dns_resolver registered
Feb 13 18:55:32.474557 kernel: registered taskstats version 1
Feb 13 18:55:32.474564 kernel: Loading compiled-in X.509 certificates
Feb 13 18:55:32.474574 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3'
Feb 13 18:55:32.474581 kernel: Key type .fscrypt registered
Feb 13 18:55:32.474588 kernel: Key type fscrypt-provisioning registered
Feb 13 18:55:32.474595 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 18:55:32.474602 kernel: ima: Allocated hash algorithm: sha1
Feb 13 18:55:32.474609 kernel: ima: No architecture policies found
Feb 13 18:55:32.474616 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 18:55:32.474623 kernel: clk: Disabling unused clocks
Feb 13 18:55:32.474631 kernel: Freeing unused kernel memory: 39936K
Feb 13 18:55:32.474640 kernel: Run /init as init process
Feb 13 18:55:32.474647 kernel:   with arguments:
Feb 13 18:55:32.474654 kernel:     /init
Feb 13 18:55:32.474661 kernel:   with environment:
Feb 13 18:55:32.474668 kernel:     HOME=/
Feb 13 18:55:32.474676 kernel:     TERM=linux
Feb 13 18:55:32.474683 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 18:55:32.474692 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:55:32.474704 systemd[1]: Detected virtualization microsoft.
Feb 13 18:55:32.474712 systemd[1]: Detected architecture arm64.
Feb 13 18:55:32.474719 systemd[1]: Running in initrd.
Feb 13 18:55:32.474727 systemd[1]: No hostname configured, using default hostname.
Feb 13 18:55:32.474734 systemd[1]: Hostname set to <localhost>.
Feb 13 18:55:32.474742 systemd[1]: Initializing machine ID from random generator.
Feb 13 18:55:32.474750 systemd[1]: Queued start job for default target initrd.target.
Feb 13 18:55:32.474757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:55:32.474767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:55:32.476838 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 18:55:32.476852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:55:32.476861 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 18:55:32.476869 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 18:55:32.476879 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 18:55:32.476895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 18:55:32.476903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:55:32.476910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:55:32.476918 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:55:32.476926 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:55:32.476934 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:55:32.476942 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:55:32.476950 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:55:32.476958 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:55:32.476968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 18:55:32.476976 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 18:55:32.476984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:55:32.476992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:55:32.477000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:55:32.477008 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:55:32.477015 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 18:55:32.477023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:55:32.477032 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 18:55:32.477040 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 18:55:32.477048 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:55:32.477056 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:55:32.477095 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 18:55:32.477118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:32.477127 systemd-journald[218]: Journal started
Feb 13 18:55:32.477151 systemd-journald[218]: Runtime Journal (/run/log/journal/cf1e6ea60ea744c8b73083858fdf5269) is 8.0M, max 78.5M, 70.5M free.
Feb 13 18:55:32.489498 systemd-modules-load[219]: Inserted module 'overlay'
Feb 13 18:55:32.507948 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:55:32.520787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 18:55:32.531162 systemd-modules-load[219]: Inserted module 'br_netfilter'
Feb 13 18:55:32.539536 kernel: Bridge firewalling registered
Feb 13 18:55:32.534825 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 18:55:32.548103 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:55:32.563080 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 18:55:32.577378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:55:32.590793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:32.616157 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:55:32.626965 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:55:32.660029 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 18:55:32.681512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:55:32.699635 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:55:32.712079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:55:32.737461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:55:32.748287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:55:32.784316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 18:55:32.799082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:55:32.821127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:55:32.838184 dracut-cmdline[249]: dracut-dracut-053
Feb 13 18:55:32.845517 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:55:32.848674 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:55:32.905062 systemd-resolved[253]: Positive Trust Anchors:
Feb 13 18:55:32.905071 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:55:32.905102 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:55:32.907899 systemd-resolved[253]: Defaulting to hostname 'linux'.
Feb 13 18:55:32.921566 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:55:32.928970 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:55:33.027789 kernel: SCSI subsystem initialized
Feb 13 18:55:33.035797 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 18:55:33.047810 kernel: iscsi: registered transport (tcp)
Feb 13 18:55:33.065336 kernel: iscsi: registered transport (qla4xxx)
Feb 13 18:55:33.065397 kernel: QLogic iSCSI HBA Driver
Feb 13 18:55:33.098874 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:55:33.117038 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 18:55:33.152412 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 18:55:33.152457 kernel: device-mapper: uevent: version 1.0.3
Feb 13 18:55:33.159402 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 18:55:33.209803 kernel: raid6: neonx8   gen() 15745 MB/s
Feb 13 18:55:33.229790 kernel: raid6: neonx4   gen() 15824 MB/s
Feb 13 18:55:33.249785 kernel: raid6: neonx2   gen() 13309 MB/s
Feb 13 18:55:33.271792 kernel: raid6: neonx1   gen() 10483 MB/s
Feb 13 18:55:33.291786 kernel: raid6: int64x8  gen()  6796 MB/s
Feb 13 18:55:33.311784 kernel: raid6: int64x4  gen()  7350 MB/s
Feb 13 18:55:33.332795 kernel: raid6: int64x2  gen()  6114 MB/s
Feb 13 18:55:33.356877 kernel: raid6: int64x1  gen()  5058 MB/s
Feb 13 18:55:33.356901 kernel: raid6: using algorithm neonx4 gen() 15824 MB/s
Feb 13 18:55:33.384942 kernel: raid6: .... xor() 12427 MB/s, rmw enabled
Feb 13 18:55:33.384960 kernel: raid6: using neon recovery algorithm
Feb 13 18:55:33.398402 kernel: xor: measuring software checksum speed
Feb 13 18:55:33.398430 kernel:    8regs           : 21653 MB/sec
Feb 13 18:55:33.402275 kernel:    32regs          : 21624 MB/sec
Feb 13 18:55:33.406477 kernel:    arm64_neon      : 27889 MB/sec
Feb 13 18:55:33.411548 kernel: xor: using function: arm64_neon (27889 MB/sec)
Feb 13 18:55:33.462816 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 18:55:33.472933 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:55:33.489903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:55:33.510197 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Feb 13 18:55:33.516524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:55:33.538465 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 18:55:33.554972 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Feb 13 18:55:33.584850 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:55:33.602047 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:55:33.642029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:55:33.665002 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 18:55:33.689995 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:55:33.705609 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:55:33.722032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:55:33.738710 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:55:33.764793 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 18:55:33.862208 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 18:55:33.862236 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 18:55:33.862247 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 18:55:33.862256 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 18:55:33.862265 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 13 18:55:33.862274 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 18:55:33.862283 kernel: scsi host0: storvsc_host_t
Feb 13 18:55:33.862317 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 13 18:55:33.862328 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on 
Feb 13 18:55:33.862475 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 13 18:55:33.862486 kernel: scsi host1: storvsc_host_t
Feb 13 18:55:33.794413 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:55:33.912103 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Feb 13 18:55:33.912180 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 18:55:33.831340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:55:33.831496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:55:33.953425 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Feb 13 18:55:33.903372 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:55:33.919337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:55:33.919615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:33.995252 kernel: PTP clock support registered
Feb 13 18:55:33.946034 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:33.988173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:34.050255 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 18:55:34.050277 kernel: hv_vmbus: registering driver hv_utils
Feb 13 18:55:34.050286 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 18:55:34.050295 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 18:55:34.050304 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: VF slot 1 added
Feb 13 18:55:34.055842 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 18:55:34.011559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:55:33.772788 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 18:55:33.788571 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 18:55:33.788587 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 18:55:33.790763 systemd-journald[218]: Time jumped backwards, rotating.
Feb 13 18:55:33.790811 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 18:55:33.934008 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 18:55:33.941898 kernel: hv_vmbus: registering driver hv_pci
Feb 13 18:55:33.941913 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 18:55:33.942093 kernel: hv_pci 01766345-0533-4708-999d-ddc432cc38e6: PCI VMBus probing: Using version 0x10004
Feb 13 18:55:34.017895 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 18:55:34.018061 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 18:55:34.018158 kernel: hv_pci 01766345-0533-4708-999d-ddc432cc38e6: PCI host bridge to bus 0533:00
Feb 13 18:55:34.018245 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 18:55:34.018254 kernel: pci_bus 0533:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 18:55:34.018350 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 18:55:34.018440 kernel: pci_bus 0533:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 18:55:34.018542 kernel: pci 0533:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 18:55:34.018641 kernel: pci 0533:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 18:55:34.018754 kernel: pci 0533:00:02.0: enabling Extended Tags
Feb 13 18:55:34.018839 kernel: pci 0533:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0533:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 18:55:34.018924 kernel: pci_bus 0533:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 18:55:34.019007 kernel: pci 0533:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 18:55:34.011692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:33.757060 systemd-resolved[253]: Clock change detected. Flushing caches.
Feb 13 18:55:33.763051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:33.798559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:33.913490 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:55:33.993437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:55:34.086769 kernel: mlx5_core 0533:00:02.0: enabling device (0000 -> 0002)
Feb 13 18:55:34.317534 kernel: mlx5_core 0533:00:02.0: firmware version: 16.30.1284
Feb 13 18:55:34.317665 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: VF registering: eth1
Feb 13 18:55:34.317790 kernel: mlx5_core 0533:00:02.0 eth1: joined to eth0
Feb 13 18:55:34.317890 kernel: mlx5_core 0533:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 18:55:34.329751 kernel: mlx5_core 0533:00:02.0 enP1331s1: renamed from eth1
Feb 13 18:55:34.467479 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 18:55:34.514974 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (485)
Feb 13 18:55:34.529412 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 18:55:34.577998 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (494)
Feb 13 18:55:34.585134 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 18:55:34.598798 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 18:55:34.609221 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 18:55:34.643969 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 18:55:34.674723 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 18:55:35.693739 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 18:55:35.694539 disk-uuid[603]: The operation has completed successfully.
Feb 13 18:55:35.758833 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 18:55:35.758927 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 18:55:35.782075 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 18:55:35.797285 sh[689]: Success
Feb 13 18:55:35.829733 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 18:55:36.039365 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 18:55:36.064858 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 18:55:36.077872 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 18:55:36.119867 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8
Feb 13 18:55:36.119924 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:55:36.127920 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 18:55:36.134022 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 18:55:36.138761 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 18:55:36.433458 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 18:55:36.439658 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 18:55:36.466045 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 18:55:36.473909 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 18:55:36.519066 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:36.519121 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:55:36.524029 kernel: BTRFS info (device sda6): using free space tree
Feb 13 18:55:36.545794 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 18:55:36.560274 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 18:55:36.567768 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:36.574615 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 18:55:36.594645 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 18:55:36.601942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:55:36.635718 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:55:36.665047 systemd-networkd[873]: lo: Link UP
Feb 13 18:55:36.665060 systemd-networkd[873]: lo: Gained carrier
Feb 13 18:55:36.667047 systemd-networkd[873]: Enumeration completed
Feb 13 18:55:36.667792 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:55:36.667795 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:55:36.669047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:55:36.686111 systemd[1]: Reached target network.target - Network.
Feb 13 18:55:36.731610 kernel: mlx5_core 0533:00:02.0 enP1331s1: Link up
Feb 13 18:55:36.770705 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: Data path switched to VF: enP1331s1
Feb 13 18:55:36.770720 systemd-networkd[873]: enP1331s1: Link UP
Feb 13 18:55:36.770809 systemd-networkd[873]: eth0: Link UP
Feb 13 18:55:36.770908 systemd-networkd[873]: eth0: Gained carrier
Feb 13 18:55:36.770917 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:55:36.798289 systemd-networkd[873]: enP1331s1: Gained carrier
Feb 13 18:55:36.810756 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 18:55:37.358646 ignition[871]: Ignition 2.20.0
Feb 13 18:55:37.358656 ignition[871]: Stage: fetch-offline
Feb 13 18:55:37.360558 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:55:37.358709 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:37.381843 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 18:55:37.358717 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:37.358803 ignition[871]: parsed url from cmdline: ""
Feb 13 18:55:37.358807 ignition[871]: no config URL provided
Feb 13 18:55:37.358811 ignition[871]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:55:37.358818 ignition[871]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:55:37.358822 ignition[871]: failed to fetch config: resource requires networking
Feb 13 18:55:37.358993 ignition[871]: Ignition finished successfully
Feb 13 18:55:37.397414 ignition[882]: Ignition 2.20.0
Feb 13 18:55:37.397423 ignition[882]: Stage: fetch
Feb 13 18:55:37.397618 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:37.397628 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:37.397753 ignition[882]: parsed url from cmdline: ""
Feb 13 18:55:37.397759 ignition[882]: no config URL provided
Feb 13 18:55:37.397764 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:55:37.397772 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:55:37.397799 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 18:55:37.504428 ignition[882]: GET result: OK
Feb 13 18:55:37.504557 ignition[882]: config has been read from IMDS userdata
Feb 13 18:55:37.504601 ignition[882]: parsing config with SHA512: aae8811000a7d67c3c9b180f534625252f091e74a9c5daf18c77d17704f03a94df7b5cc1ffc9786c4e35c909d932023cf8993074023ed57169be3e9c54af71ae
Feb 13 18:55:37.510880 unknown[882]: fetched base config from "system"
Feb 13 18:55:37.511409 ignition[882]: fetch: fetch complete
Feb 13 18:55:37.510894 unknown[882]: fetched base config from "system"
Feb 13 18:55:37.511416 ignition[882]: fetch: fetch passed
Feb 13 18:55:37.510899 unknown[882]: fetched user config from "azure"
Feb 13 18:55:37.511483 ignition[882]: Ignition finished successfully
Feb 13 18:55:37.515177 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 18:55:37.556795 ignition[888]: Ignition 2.20.0
Feb 13 18:55:37.538867 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 18:55:37.556806 ignition[888]: Stage: kargs
Feb 13 18:55:37.563651 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 18:55:37.557017 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:37.557028 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:37.558210 ignition[888]: kargs: kargs passed
Feb 13 18:55:37.589994 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 18:55:37.558284 ignition[888]: Ignition finished successfully
Feb 13 18:55:37.615354 ignition[895]: Ignition 2.20.0
Feb 13 18:55:37.619906 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 18:55:37.615361 ignition[895]: Stage: disks
Feb 13 18:55:37.626896 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 18:55:37.615553 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:37.636840 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 18:55:37.615563 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:37.649629 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:55:37.616633 ignition[895]: disks: disks passed
Feb 13 18:55:37.658951 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:55:37.616715 ignition[895]: Ignition finished successfully
Feb 13 18:55:37.671580 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:55:37.704925 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 18:55:37.789211 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 18:55:37.793292 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 18:55:37.816918 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 18:55:37.875709 kernel: EXT4-fs (sda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none.
Feb 13 18:55:37.876957 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 18:55:37.882278 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:55:37.928796 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:55:37.936868 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 18:55:37.949837 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 18:55:37.966028 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 18:55:38.000260 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915)
Feb 13 18:55:38.000282 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:37.966065 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:55:38.028966 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:55:38.028988 kernel: BTRFS info (device sda6): using free space tree
Feb 13 18:55:37.994354 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 18:55:38.039170 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 18:55:38.039481 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 18:55:38.054465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:55:38.504938 systemd-networkd[873]: eth0: Gained IPv6LL
Feb 13 18:55:38.518306 coreos-metadata[917]: Feb 13 18:55:38.518 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 18:55:38.527905 coreos-metadata[917]: Feb 13 18:55:38.521 INFO Fetch successful
Feb 13 18:55:38.527905 coreos-metadata[917]: Feb 13 18:55:38.521 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 18:55:38.544566 coreos-metadata[917]: Feb 13 18:55:38.544 INFO Fetch successful
Feb 13 18:55:38.558752 coreos-metadata[917]: Feb 13 18:55:38.558 INFO wrote hostname ci-4186.1.1-a-21f48afc48 to /sysroot/etc/hostname
Feb 13 18:55:38.561112 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 18:55:38.661150 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 18:55:38.701945 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Feb 13 18:55:38.726816 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 18:55:38.753630 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 18:55:38.760864 systemd-networkd[873]: enP1331s1: Gained IPv6LL
Feb 13 18:55:39.624488 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 18:55:39.641942 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 18:55:39.650870 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 18:55:39.676433 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:39.669372 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 18:55:39.701380 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 18:55:39.716594 ignition[1036]: INFO     : Ignition 2.20.0
Feb 13 18:55:39.716594 ignition[1036]: INFO     : Stage: mount
Feb 13 18:55:39.716594 ignition[1036]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:39.716594 ignition[1036]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:39.716594 ignition[1036]: INFO     : mount: mount passed
Feb 13 18:55:39.716594 ignition[1036]: INFO     : Ignition finished successfully
Feb 13 18:55:39.717032 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 18:55:39.746781 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 18:55:39.760930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:55:39.796707 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049)
Feb 13 18:55:39.810494 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:39.810531 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:55:39.815158 kernel: BTRFS info (device sda6): using free space tree
Feb 13 18:55:39.824505 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 18:55:39.825093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:55:39.851194 ignition[1066]: INFO     : Ignition 2.20.0
Feb 13 18:55:39.855628 ignition[1066]: INFO     : Stage: files
Feb 13 18:55:39.855628 ignition[1066]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:39.855628 ignition[1066]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:39.855628 ignition[1066]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 18:55:39.878925 ignition[1066]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 18:55:39.878925 ignition[1066]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 18:55:39.938465 ignition[1066]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 18:55:39.946779 ignition[1066]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 18:55:39.946779 ignition[1066]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 18:55:39.946779 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 18:55:39.946779 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 18:55:39.938944 unknown[1066]: wrote ssh authorized keys file for user: core
Feb 13 18:55:40.021095 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 18:55:40.210618 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 18:55:40.210618 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 18:55:40.236272 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 18:55:40.649685 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 18:55:41.124665 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 18:55:41.348348 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:55:41.348348 ignition[1066]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(e): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: createResultFile: createFiles: op(f): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: files passed
Feb 13 18:55:41.371882 ignition[1066]: INFO     : Ignition finished successfully
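The file, link, and unit operations recorded above are driven by the Ignition config fetched for this node; the config itself is not reproduced in the journal. Purely as an illustration, the Python sketch below builds a config in the shape of the Ignition v3 schema that would request roughly these operations. The paths and download URLs are copied from the log (the /sysroot prefix is only the initrd's mount point for the real root, so it is omitted from config paths), while the spec version, file mode, key material, and unit contents are assumptions and marked as such.

    import json

    # Hypothetical reconstruction: the real config was supplied by the platform
    # and is not shown in this log.
    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "passwd": {
            "users": [{
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (key not shown in the log)"],
            }],
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
                {"path": "/home/core/install.sh", "mode": 0o700},  # mode is an assumption
                {"path": "/home/core/nginx.yaml"},
                {"path": "/home/core/nfs-pod.yaml"},
                {"path": "/home/core/nfs-pvc.yaml"},
                {"path": "/etc/flatcar/update.conf"},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"},
            ],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True,
                       "contents": "# unit text not shown in the log"}],
        },
    }

    print(json.dumps(config, indent=2))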
Feb 13 18:55:41.366540 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 18:55:41.408124 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 18:55:41.415846 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 18:55:41.442646 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 18:55:41.535730 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:55:41.535730 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:55:41.442773 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 18:55:41.576053 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:55:41.451900 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:55:41.469520 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 18:55:41.498939 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 18:55:41.537935 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 18:55:41.538059 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 18:55:41.552334 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 18:55:41.569257 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 18:55:41.582452 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 18:55:41.599981 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 18:55:41.643170 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:55:41.660960 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 18:55:41.695362 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 18:55:41.695464 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 18:55:41.708008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:55:41.723113 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:55:41.736091 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 18:55:41.748559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 18:55:41.748629 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:55:41.774373 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 18:55:41.785923 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 18:55:41.797415 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 18:55:41.811134 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:55:41.825640 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 18:55:41.838048 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 18:55:41.849856 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:55:41.863633 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 18:55:41.876537 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 18:55:41.888004 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 18:55:41.899593 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 18:55:41.899666 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:55:41.918097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:55:41.928462 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:55:41.941296 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 18:55:41.941337 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:55:41.954266 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 18:55:41.954337 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:55:41.972633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 18:55:41.972680 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:55:41.980423 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 18:55:41.980469 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 18:55:42.059068 ignition[1118]: INFO     : Ignition 2.20.0
Feb 13 18:55:42.059068 ignition[1118]: INFO     : Stage: umount
Feb 13 18:55:42.059068 ignition[1118]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:42.059068 ignition[1118]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:42.059068 ignition[1118]: INFO     : umount: umount passed
Feb 13 18:55:42.059068 ignition[1118]: INFO     : Ignition finished successfully
Feb 13 18:55:41.990960 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 18:55:41.991000 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 18:55:42.024888 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 18:55:42.044492 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 18:55:42.044578 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:55:42.056828 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 18:55:42.066242 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 18:55:42.066304 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:55:42.078564 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 18:55:42.078615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:55:42.108573 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 18:55:42.108679 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 18:55:42.128003 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 18:55:42.128116 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 18:55:42.141732 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 18:55:42.141801 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 18:55:42.153062 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 18:55:42.153110 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 18:55:42.164626 systemd[1]: Stopped target network.target - Network.
Feb 13 18:55:42.182079 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 18:55:42.182165 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:55:42.199363 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 18:55:42.212087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 18:55:42.215724 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:55:42.234280 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 18:55:42.246738 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 18:55:42.257924 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 18:55:42.257980 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:55:42.269919 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 18:55:42.269967 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:55:42.282036 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 18:55:42.282094 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 18:55:42.294031 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 18:55:42.294092 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 18:55:42.306856 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 18:55:42.319673 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 18:55:42.333745 systemd-networkd[873]: eth0: DHCPv6 lease lost
Feb 13 18:55:42.334960 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 18:55:42.339603 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 18:55:42.339743 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 18:55:42.348863 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 18:55:42.600560 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: Data path switched from VF: enP1331s1
Feb 13 18:55:42.348954 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 18:55:42.362944 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 18:55:42.363089 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 18:55:42.377215 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 18:55:42.377266 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:55:42.392183 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 18:55:42.392246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 18:55:42.428941 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 18:55:42.439041 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 18:55:42.439115 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:55:42.450920 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 18:55:42.450968 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:55:42.462373 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 18:55:42.462418 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:55:42.474357 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 18:55:42.474407 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:55:42.487139 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:55:42.539613 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 18:55:42.539804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:55:42.551264 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 18:55:42.551312 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:55:42.562247 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 18:55:42.562282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:55:42.574288 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 18:55:42.574338 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:55:42.600620 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 18:55:42.600723 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:55:42.613797 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:55:42.613853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:55:42.660980 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 18:55:42.676232 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 18:55:42.676319 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:55:42.693859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:55:42.693916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:42.705984 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 18:55:42.706092 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 18:55:42.901120 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Feb 13 18:55:42.717417 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 18:55:42.717513 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 18:55:42.730670 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 18:55:42.762940 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 18:55:42.802740 systemd[1]: Switching root.
Feb 13 18:55:42.931243 systemd-journald[218]: Journal stopped
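At this point the initrd journal shuts down and systemd switches into the real root filesystem. The journald instance started there imports the kernel ring buffer from /dev/kmsg, which is why the kernel boot messages reappear below with their original 18:55:32 timestamps. To look at just the kernel view of the current boot afterwards, the journal can be filtered for it; a minimal sketch, assuming journalctl is available on the booted system:

    import subprocess

    # Kernel messages of the current boot as stored in the journal,
    # roughly equivalent to running `journalctl -k` by hand.
    subprocess.run(["journalctl", "-k", "-b", "0", "--no-pager"], check=False)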
Feb 13 18:55:32.470497 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 18:55:32.470520 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025
Feb 13 18:55:32.470533 kernel: KASLR enabled
Feb 13 18:55:32.470539 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 13 18:55:32.470546 kernel: printk: bootconsole [pl11] enabled
Feb 13 18:55:32.470552 kernel: efi: EFI v2.7 by EDK II
Feb 13 18:55:32.470559 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 
Feb 13 18:55:32.470565 kernel: random: crng init done
Feb 13 18:55:32.470571 kernel: secureboot: Secure boot disabled
Feb 13 18:55:32.470577 kernel: ACPI: Early table checksum verification disabled
Feb 13 18:55:32.470583 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Feb 13 18:55:32.470589 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470595 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470602 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01   00000001 INTL 20230628)
Feb 13 18:55:32.470609 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470618 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470668 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470683 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470690 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470696 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470717 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 13 18:55:32.470724 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 13 18:55:32.470730 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 13 18:55:32.470737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Feb 13 18:55:32.470744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Feb 13 18:55:32.470750 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Feb 13 18:55:32.470757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Feb 13 18:55:32.470763 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Feb 13 18:55:32.472816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Feb 13 18:55:32.472839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Feb 13 18:55:32.472846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Feb 13 18:55:32.472853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Feb 13 18:55:32.472859 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Feb 13 18:55:32.472865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Feb 13 18:55:32.472871 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Feb 13 18:55:32.472878 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Feb 13 18:55:32.472884 kernel: Zone ranges:
Feb 13 18:55:32.472890 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Feb 13 18:55:32.472896 kernel:   DMA32    empty
Feb 13 18:55:32.472903 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 18:55:32.472918 kernel: Movable zone start for each node
Feb 13 18:55:32.472925 kernel: Early memory node ranges
Feb 13 18:55:32.472932 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 13 18:55:32.472938 kernel:   node   0: [mem 0x0000000000824000-0x000000003e45ffff]
Feb 13 18:55:32.472945 kernel:   node   0: [mem 0x000000003e460000-0x000000003e46ffff]
Feb 13 18:55:32.472953 kernel:   node   0: [mem 0x000000003e470000-0x000000003e54ffff]
Feb 13 18:55:32.472960 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Feb 13 18:55:32.472966 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Feb 13 18:55:32.472973 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Feb 13 18:55:32.472980 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Feb 13 18:55:32.472986 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 13 18:55:32.472994 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 13 18:55:32.473000 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 13 18:55:32.473007 kernel: psci: probing for conduit method from ACPI.
Feb 13 18:55:32.473013 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 18:55:32.473020 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 18:55:32.473027 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 13 18:55:32.473035 kernel: psci: SMC Calling Convention v1.4
Feb 13 18:55:32.473042 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Feb 13 18:55:32.473048 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Feb 13 18:55:32.473055 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 18:55:32.473061 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 18:55:32.473068 kernel: pcpu-alloc: [0] 0 [0] 1 
Feb 13 18:55:32.473075 kernel: Detected PIPT I-cache on CPU0
Feb 13 18:55:32.473082 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 18:55:32.473088 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 18:55:32.473095 kernel: CPU features: detected: Spectre-BHB
Feb 13 18:55:32.473101 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 18:55:32.473110 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 18:55:32.473116 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 18:55:32.473123 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 13 18:55:32.473129 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 18:55:32.473136 kernel: alternatives: applying boot alternatives
Feb 13 18:55:32.473144 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:55:32.473151 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
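The command line echoed above carries the knobs that later initrd units act on: root=LABEL=ROOT and mount.usr=/dev/mapper/usr select the root and /usr devices, verity.usr/verity.usrhash pin the /usr verity tree, and flatcar.first_boot plus flatcar.oem.id=azure steer Ignition. As a reading aid only, here is a small Python sketch that splits such a command line into flags and key=value pairs; the parsing is deliberately simplified (no quoting rules) and is not the kernel's own parser.

    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 "
        "console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected "
        "acpi=force flatcar.oem.id=azure flatcar.autologin "
        "verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b"
    )

    params, flags = {}, []
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        if sep:
            params.setdefault(key, []).append(value)  # console= legitimately appears twice
        else:
            flags.append(key)                         # bare switches such as flatcar.autologin

    print(params["root"], params["flatcar.oem.id"], params["console"], flags)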
Feb 13 18:55:32.473158 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 18:55:32.473164 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 18:55:32.473171 kernel: Fallback order for Node 0: 0 
Feb 13 18:55:32.473177 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1032156
Feb 13 18:55:32.473185 kernel: Policy zone: Normal
Feb 13 18:55:32.473192 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 18:55:32.473198 kernel: software IO TLB: area num 2.
Feb 13 18:55:32.473205 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Feb 13 18:55:32.473212 kernel: Memory: 3982052K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212108K reserved, 0K cma-reserved)
Feb 13 18:55:32.473219 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 18:55:32.473225 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 18:55:32.473232 kernel: rcu:         RCU event tracing is enabled.
Feb 13 18:55:32.473239 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 18:55:32.473246 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 18:55:32.473252 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 18:55:32.473261 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 18:55:32.473268 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 18:55:32.473274 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 18:55:32.473280 kernel: GICv3: 960 SPIs implemented
Feb 13 18:55:32.473287 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 18:55:32.473293 kernel: Root IRQ handler: gic_handle_irq
Feb 13 18:55:32.473300 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 18:55:32.473307 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 13 18:55:32.473313 kernel: ITS: No ITS available, not enabling LPIs
Feb 13 18:55:32.473320 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 18:55:32.473326 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:55:32.473333 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 18:55:32.473342 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 18:55:32.473348 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 18:55:32.473355 kernel: Console: colour dummy device 80x25
Feb 13 18:55:32.473362 kernel: printk: console [tty1] enabled
Feb 13 18:55:32.473369 kernel: ACPI: Core revision 20230628
Feb 13 18:55:32.473376 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 18:55:32.473383 kernel: pid_max: default: 32768 minimum: 301
Feb 13 18:55:32.473390 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 18:55:32.473396 kernel: landlock: Up and running.
Feb 13 18:55:32.473404 kernel: SELinux:  Initializing.
Feb 13 18:55:32.473411 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:55:32.473418 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:55:32.473425 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 18:55:32.473432 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 18:55:32.473439 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 13 18:55:32.473446 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Feb 13 18:55:32.473460 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Feb 13 18:55:32.473467 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 18:55:32.473474 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 18:55:32.473481 kernel: Remapping and enabling EFI services.
Feb 13 18:55:32.473488 kernel: smp: Bringing up secondary CPUs ...
Feb 13 18:55:32.473497 kernel: Detected PIPT I-cache on CPU1
Feb 13 18:55:32.473504 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 13 18:55:32.473512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 18:55:32.473519 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 18:55:32.473526 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 18:55:32.473535 kernel: SMP: Total of 2 processors activated.
Feb 13 18:55:32.473542 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 18:55:32.473549 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 13 18:55:32.473556 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 18:55:32.473563 kernel: CPU features: detected: CRC32 instructions
Feb 13 18:55:32.473570 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 18:55:32.473578 kernel: CPU features: detected: LSE atomic instructions
Feb 13 18:55:32.473585 kernel: CPU features: detected: Privileged Access Never
Feb 13 18:55:32.473592 kernel: CPU: All CPU(s) started at EL1
Feb 13 18:55:32.473601 kernel: alternatives: applying system-wide alternatives
Feb 13 18:55:32.473608 kernel: devtmpfs: initialized
Feb 13 18:55:32.473615 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 18:55:32.473622 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 18:55:32.473630 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 18:55:32.473637 kernel: SMBIOS 3.1.0 present.
Feb 13 18:55:32.473644 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Feb 13 18:55:32.473651 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 18:55:32.473658 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 18:55:32.473667 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 18:55:32.473674 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 18:55:32.473682 kernel: audit: initializing netlink subsys (disabled)
Feb 13 18:55:32.473689 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Feb 13 18:55:32.473696 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 18:55:32.473703 kernel: cpuidle: using governor menu
Feb 13 18:55:32.473710 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 18:55:32.473717 kernel: ASID allocator initialised with 32768 entries
Feb 13 18:55:32.473724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 18:55:32.473733 kernel: Serial: AMBA PL011 UART driver
Feb 13 18:55:32.473740 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 18:55:32.473747 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 18:55:32.473754 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 18:55:32.473761 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 18:55:32.473768 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 18:55:32.473786 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 18:55:32.473794 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 18:55:32.473801 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 18:55:32.473811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 18:55:32.473818 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 18:55:32.473825 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 18:55:32.473832 kernel: ACPI: Added _OSI(Module Device)
Feb 13 18:55:32.473840 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 18:55:32.473847 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 18:55:32.473854 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 18:55:32.473861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 18:55:32.473868 kernel: ACPI: Interpreter enabled
Feb 13 18:55:32.473877 kernel: ACPI: Using GIC for interrupt routing
Feb 13 18:55:32.473884 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 18:55:32.473892 kernel: printk: console [ttyAMA0] enabled
Feb 13 18:55:32.473899 kernel: printk: bootconsole [pl11] disabled
Feb 13 18:55:32.473906 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 13 18:55:32.473913 kernel: iommu: Default domain type: Translated
Feb 13 18:55:32.473920 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 18:55:32.473927 kernel: efivars: Registered efivars operations
Feb 13 18:55:32.473935 kernel: vgaarb: loaded
Feb 13 18:55:32.473943 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 18:55:32.473951 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 18:55:32.473958 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 18:55:32.473965 kernel: pnp: PnP ACPI init
Feb 13 18:55:32.473972 kernel: pnp: PnP ACPI: found 0 devices
Feb 13 18:55:32.473979 kernel: NET: Registered PF_INET protocol family
Feb 13 18:55:32.473987 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 18:55:32.473994 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 18:55:32.474001 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 18:55:32.474010 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 18:55:32.474017 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 18:55:32.474025 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 18:55:32.474032 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:55:32.474039 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:55:32.474046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 18:55:32.474054 kernel: PCI: CLS 0 bytes, default 64
Feb 13 18:55:32.474061 kernel: kvm [1]: HYP mode not available
Feb 13 18:55:32.474068 kernel: Initialise system trusted keyrings
Feb 13 18:55:32.474076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 18:55:32.474084 kernel: Key type asymmetric registered
Feb 13 18:55:32.474091 kernel: Asymmetric key parser 'x509' registered
Feb 13 18:55:32.474098 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 18:55:32.474105 kernel: io scheduler mq-deadline registered
Feb 13 18:55:32.474112 kernel: io scheduler kyber registered
Feb 13 18:55:32.474119 kernel: io scheduler bfq registered
Feb 13 18:55:32.474126 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 18:55:32.474134 kernel: thunder_xcv, ver 1.0
Feb 13 18:55:32.474142 kernel: thunder_bgx, ver 1.0
Feb 13 18:55:32.474149 kernel: nicpf, ver 1.0
Feb 13 18:55:32.474156 kernel: nicvf, ver 1.0
Feb 13 18:55:32.474327 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 18:55:32.474405 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:55:31 UTC (1739472931)
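The bracketed value in the RTC line above is the same instant expressed in seconds since the Unix epoch; a one-line check, using nothing beyond what the log line already states:

    from datetime import datetime, timezone

    # 1739472931 seconds after 1970-01-01T00:00:00Z is 2025-02-13T18:55:31Z, as logged.
    assert datetime.fromtimestamp(1739472931, tz=timezone.utc) \
        == datetime(2025, 2, 13, 18, 55, 31, tzinfo=timezone.utc)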
Feb 13 18:55:32.474416 kernel: efifb: probing for efifb
Feb 13 18:55:32.474424 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 13 18:55:32.474431 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 13 18:55:32.474441 kernel: efifb: scrolling: redraw
Feb 13 18:55:32.474448 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 18:55:32.474456 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 18:55:32.474463 kernel: fb0: EFI VGA frame buffer device
Feb 13 18:55:32.474470 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 13 18:55:32.474477 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 18:55:32.474484 kernel: No ACPI PMU IRQ for CPU0
Feb 13 18:55:32.474491 kernel: No ACPI PMU IRQ for CPU1
Feb 13 18:55:32.474498 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 13 18:55:32.474507 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 18:55:32.474514 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 18:55:32.474522 kernel: NET: Registered PF_INET6 protocol family
Feb 13 18:55:32.474529 kernel: Segment Routing with IPv6
Feb 13 18:55:32.474536 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 18:55:32.474543 kernel: NET: Registered PF_PACKET protocol family
Feb 13 18:55:32.474550 kernel: Key type dns_resolver registered
Feb 13 18:55:32.474557 kernel: registered taskstats version 1
Feb 13 18:55:32.474564 kernel: Loading compiled-in X.509 certificates
Feb 13 18:55:32.474574 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3'
Feb 13 18:55:32.474581 kernel: Key type .fscrypt registered
Feb 13 18:55:32.474588 kernel: Key type fscrypt-provisioning registered
Feb 13 18:55:32.474595 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 18:55:32.474602 kernel: ima: Allocated hash algorithm: sha1
Feb 13 18:55:32.474609 kernel: ima: No architecture policies found
Feb 13 18:55:32.474616 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 18:55:32.474623 kernel: clk: Disabling unused clocks
Feb 13 18:55:32.474631 kernel: Freeing unused kernel memory: 39936K
Feb 13 18:55:32.474640 kernel: Run /init as init process
Feb 13 18:55:32.474647 kernel:   with arguments:
Feb 13 18:55:32.474654 kernel:     /init
Feb 13 18:55:32.474661 kernel:   with environment:
Feb 13 18:55:32.474668 kernel:     HOME=/
Feb 13 18:55:32.474676 kernel:     TERM=linux
Feb 13 18:55:32.474683 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 18:55:32.474692 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:55:32.474704 systemd[1]: Detected virtualization microsoft.
Feb 13 18:55:32.474712 systemd[1]: Detected architecture arm64.
Feb 13 18:55:32.474719 systemd[1]: Running in initrd.
Feb 13 18:55:32.474727 systemd[1]: No hostname configured, using default hostname.
Feb 13 18:55:32.474734 systemd[1]: Hostname set to <localhost>.
Feb 13 18:55:32.474742 systemd[1]: Initializing machine ID from random generator.
Feb 13 18:55:32.474750 systemd[1]: Queued start job for default target initrd.target.
Feb 13 18:55:32.474757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:55:32.474767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:55:32.476838 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 18:55:32.476852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:55:32.476861 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 18:55:32.476869 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 18:55:32.476879 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 18:55:32.476895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 18:55:32.476903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:55:32.476910 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:55:32.476918 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:55:32.476926 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:55:32.476934 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:55:32.476942 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:55:32.476950 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:55:32.476958 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:55:32.476968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 18:55:32.476976 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 18:55:32.476984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:55:32.476992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:55:32.477000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:55:32.477008 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:55:32.477015 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 18:55:32.477023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:55:32.477032 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 18:55:32.477040 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 18:55:32.477048 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:55:32.477056 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:55:32.477095 systemd-journald[218]: Collecting audit messages is disabled.
Feb 13 18:55:32.477118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:32.477127 systemd-journald[218]: Journal started
Feb 13 18:55:32.477151 systemd-journald[218]: Runtime Journal (/run/log/journal/cf1e6ea60ea744c8b73083858fdf5269) is 8.0M, max 78.5M, 70.5M free.
Feb 13 18:55:32.489498 systemd-modules-load[219]: Inserted module 'overlay'
Feb 13 18:55:32.507948 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:55:32.520787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 18:55:32.531162 systemd-modules-load[219]: Inserted module 'br_netfilter'
Feb 13 18:55:32.539536 kernel: Bridge firewalling registered
Feb 13 18:55:32.534825 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 18:55:32.548103 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:55:32.563080 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 18:55:32.577378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:55:32.590793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:32.616157 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:55:32.626965 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:55:32.660029 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 18:55:32.681512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:55:32.699635 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:55:32.712079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:55:32.737461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:55:32.748287 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:55:32.784316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 18:55:32.799082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:55:32.821127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:55:32.838184 dracut-cmdline[249]: dracut-dracut-053
Feb 13 18:55:32.845517 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:55:32.848674 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:55:32.905062 systemd-resolved[253]: Positive Trust Anchors:
Feb 13 18:55:32.905071 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:55:32.905102 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:55:32.907899 systemd-resolved[253]: Defaulting to hostname 'linux'.
Feb 13 18:55:32.921566 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:55:32.928970 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:55:33.027789 kernel: SCSI subsystem initialized
Feb 13 18:55:33.035797 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 18:55:33.047810 kernel: iscsi: registered transport (tcp)
Feb 13 18:55:33.065336 kernel: iscsi: registered transport (qla4xxx)
Feb 13 18:55:33.065397 kernel: QLogic iSCSI HBA Driver
Feb 13 18:55:33.098874 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:55:33.117038 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 18:55:33.152412 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 18:55:33.152457 kernel: device-mapper: uevent: version 1.0.3
Feb 13 18:55:33.159402 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 18:55:33.209803 kernel: raid6: neonx8   gen() 15745 MB/s
Feb 13 18:55:33.229790 kernel: raid6: neonx4   gen() 15824 MB/s
Feb 13 18:55:33.249785 kernel: raid6: neonx2   gen() 13309 MB/s
Feb 13 18:55:33.271792 kernel: raid6: neonx1   gen() 10483 MB/s
Feb 13 18:55:33.291786 kernel: raid6: int64x8  gen()  6796 MB/s
Feb 13 18:55:33.311784 kernel: raid6: int64x4  gen()  7350 MB/s
Feb 13 18:55:33.332795 kernel: raid6: int64x2  gen()  6114 MB/s
Feb 13 18:55:33.356877 kernel: raid6: int64x1  gen()  5058 MB/s
Feb 13 18:55:33.356901 kernel: raid6: using algorithm neonx4 gen() 15824 MB/s
Feb 13 18:55:33.384942 kernel: raid6: .... xor() 12427 MB/s, rmw enabled
Feb 13 18:55:33.384960 kernel: raid6: using neon recovery algorithm
Feb 13 18:55:33.398402 kernel: xor: measuring software checksum speed
Feb 13 18:55:33.398430 kernel:    8regs           : 21653 MB/sec
Feb 13 18:55:33.402275 kernel:    32regs          : 21624 MB/sec
Feb 13 18:55:33.406477 kernel:    arm64_neon      : 27889 MB/sec
Feb 13 18:55:33.411548 kernel: xor: using function: arm64_neon (27889 MB/sec)
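The two benchmark blocks above end with the kernel picking the fastest candidate it measured: neonx4 for the raid6 gen() routine and arm64_neon for xor checksumming. Purely as a reading aid, this sketch reproduces that selection from the numbers in the log:

    raid6_gen = {"neonx8": 15745, "neonx4": 15824, "neonx2": 13309, "neonx1": 10483,
                 "int64x8": 6796, "int64x4": 7350, "int64x2": 6114, "int64x1": 5058}
    xor_bench = {"8regs": 21653, "32regs": 21624, "arm64_neon": 27889}

    # Matches the kernel's logged choices: neonx4 (15824 MB/s) and arm64_neon (27889 MB/s).
    print(max(raid6_gen, key=raid6_gen.get), max(xor_bench, key=xor_bench.get))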
Feb 13 18:55:33.462816 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 18:55:33.472933 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:55:33.489903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:55:33.510197 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Feb 13 18:55:33.516524 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:55:33.538465 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 18:55:33.554972 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Feb 13 18:55:33.584850 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:55:33.602047 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:55:33.642029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:55:33.665002 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 18:55:33.689995 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:55:33.705609 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:55:33.722032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:55:33.738710 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:55:33.764793 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 18:55:33.862208 kernel: hv_vmbus: Vmbus version:5.3
Feb 13 18:55:33.862236 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 18:55:33.862247 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 13 18:55:33.862256 kernel: hv_vmbus: registering driver hid_hyperv
Feb 13 18:55:33.862265 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 13 18:55:33.862274 kernel: hv_vmbus: registering driver hv_storvsc
Feb 13 18:55:33.862283 kernel: scsi host0: storvsc_host_t
Feb 13 18:55:33.862317 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 13 18:55:33.862328 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on 
Feb 13 18:55:33.862475 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 13 18:55:33.862486 kernel: scsi host1: storvsc_host_t
Feb 13 18:55:33.794413 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:55:33.912103 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Feb 13 18:55:33.912180 kernel: hv_vmbus: registering driver hv_netvsc
Feb 13 18:55:33.831340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:55:33.831496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:55:33.953425 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Feb 13 18:55:33.903372 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:55:33.919337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:55:33.919615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:33.995252 kernel: PTP clock support registered
Feb 13 18:55:33.946034 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:33.988173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:34.050255 kernel: hv_utils: Registering HyperV Utility Driver
Feb 13 18:55:34.050277 kernel: hv_vmbus: registering driver hv_utils
Feb 13 18:55:34.050286 kernel: hv_utils: Heartbeat IC version 3.0
Feb 13 18:55:34.050295 kernel: hv_utils: Shutdown IC version 3.2
Feb 13 18:55:34.050304 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: VF slot 1 added
Feb 13 18:55:34.055842 kernel: hv_utils: TimeSync IC version 4.0
Feb 13 18:55:34.011559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:55:33.772788 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 13 18:55:33.788571 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 18:55:33.788587 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 13 18:55:33.790763 systemd-journald[218]: Time jumped backwards, rotating.
Feb 13 18:55:33.790811 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 13 18:55:33.934008 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 13 18:55:33.941898 kernel: hv_vmbus: registering driver hv_pci
Feb 13 18:55:33.941913 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 18:55:33.942093 kernel: hv_pci 01766345-0533-4708-999d-ddc432cc38e6: PCI VMBus probing: Using version 0x10004
Feb 13 18:55:34.017895 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 13 18:55:34.018061 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 13 18:55:34.018158 kernel: hv_pci 01766345-0533-4708-999d-ddc432cc38e6: PCI host bridge to bus 0533:00
Feb 13 18:55:34.018245 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 18:55:34.018254 kernel: pci_bus 0533:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 13 18:55:34.018350 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 18:55:34.018440 kernel: pci_bus 0533:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 13 18:55:34.018542 kernel: pci 0533:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 13 18:55:34.018641 kernel: pci 0533:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 18:55:34.018754 kernel: pci 0533:00:02.0: enabling Extended Tags
Feb 13 18:55:34.018839 kernel: pci 0533:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0533:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 13 18:55:34.018924 kernel: pci_bus 0533:00: busn_res: [bus 00-ff] end is updated to 00
Feb 13 18:55:34.019007 kernel: pci 0533:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 13 18:55:34.011692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:33.757060 systemd-resolved[253]: Clock change detected. Flushing caches.
Feb 13 18:55:33.763051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:33.798559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:33.913490 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:55:33.993437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:55:34.086769 kernel: mlx5_core 0533:00:02.0: enabling device (0000 -> 0002)
Feb 13 18:55:34.317534 kernel: mlx5_core 0533:00:02.0: firmware version: 16.30.1284
Feb 13 18:55:34.317665 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: VF registering: eth1
Feb 13 18:55:34.317790 kernel: mlx5_core 0533:00:02.0 eth1: joined to eth0
Feb 13 18:55:34.317890 kernel: mlx5_core 0533:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Feb 13 18:55:34.329751 kernel: mlx5_core 0533:00:02.0 enP1331s1: renamed from eth1
Feb 13 18:55:34.467479 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Feb 13 18:55:34.514974 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (485)
Feb 13 18:55:34.529412 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 18:55:34.577998 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (494)
Feb 13 18:55:34.585134 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Feb 13 18:55:34.598798 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Feb 13 18:55:34.609221 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Feb 13 18:55:34.643969 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 18:55:34.674723 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 18:55:35.693739 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 18:55:35.694539 disk-uuid[603]: The operation has completed successfully.
Feb 13 18:55:35.758833 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 18:55:35.758927 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 18:55:35.782075 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 18:55:35.797285 sh[689]: Success
Feb 13 18:55:35.829733 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 18:55:36.039365 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 18:55:36.064858 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 18:55:36.077872 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
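/dev/mapper/usr is a read-only dm-verity device: every block read from the /usr filesystem is checked against a hash tree whose root hash was supplied as verity.usrhash= on the kernel command line. The exact devices and offsets used here are handled by systemd's verity setup and are not spelled out in the log, so the sketch below only illustrates the underlying mechanism on a throwaway image with veritysetup; the file names are placeholders, not anything taken from this system.

    import subprocess

    # Scratch data image (8 MiB of zeros) standing in for a /usr partition.
    subprocess.run(["dd", "if=/dev/zero", "of=data.img", "bs=1M", "count=8"], check=True)

    # Build a hash tree over the data; veritysetup prints the resulting root hash.
    out = subprocess.run(["veritysetup", "format", "data.img", "hash.img"],
                         check=True, capture_output=True, text=True).stdout
    root_hash = next(line.split()[-1] for line in out.splitlines()
                     if line.startswith("Root hash"))

    # Offline check that the data still matches the tree and root hash, which is the
    # same property the kernel enforces on every read from /dev/mapper/usr.
    subprocess.run(["veritysetup", "verify", "data.img", "hash.img", root_hash], check=True)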
Feb 13 18:55:36.119867 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8
Feb 13 18:55:36.119924 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:55:36.127920 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 18:55:36.134022 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 18:55:36.138761 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 18:55:36.433458 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 18:55:36.439658 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 18:55:36.466045 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 18:55:36.473909 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 18:55:36.519066 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:36.519121 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:55:36.524029 kernel: BTRFS info (device sda6): using free space tree
Feb 13 18:55:36.545794 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 18:55:36.560274 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 18:55:36.567768 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:36.574615 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 18:55:36.594645 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 18:55:36.601942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:55:36.635718 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:55:36.665047 systemd-networkd[873]: lo: Link UP
Feb 13 18:55:36.665060 systemd-networkd[873]: lo: Gained carrier
Feb 13 18:55:36.667047 systemd-networkd[873]: Enumeration completed
Feb 13 18:55:36.667792 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:55:36.667795 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:55:36.669047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:55:36.686111 systemd[1]: Reached target network.target - Network.
Feb 13 18:55:36.731610 kernel: mlx5_core 0533:00:02.0 enP1331s1: Link up
Feb 13 18:55:36.770705 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: Data path switched to VF: enP1331s1
Feb 13 18:55:36.770720 systemd-networkd[873]: enP1331s1: Link UP
Feb 13 18:55:36.770809 systemd-networkd[873]: eth0: Link UP
Feb 13 18:55:36.770908 systemd-networkd[873]: eth0: Gained carrier
Feb 13 18:55:36.770917 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:55:36.798289 systemd-networkd[873]: enP1331s1: Gained carrier
Feb 13 18:55:36.810756 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
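At this point systemd-networkd has brought lo and eth0 up and taken a DHCPv4 lease from the platform DHCP server. A minimal sketch for cross-checking that state from the node, assuming the interface is still named eth0 (only the VF was renamed to enP1331s1 above), by reading the kernel's sysfs attributes:

    #!/usr/bin/env python3
    # Illustrative only: report the link state that systemd-networkd logged above
    # by reading /sys/class/net/<iface>/operstate and carrier.
    from pathlib import Path

    def link_state(ifname: str) -> dict:
        base = Path("/sys/class/net") / ifname
        state = {"operstate": (base / "operstate").read_text().strip()}
        try:
            # carrier is only readable while the link is administratively up
            state["carrier"] = (base / "carrier").read_text().strip() == "1"
        except OSError:
            state["carrier"] = None
        return state

    if __name__ == "__main__":
        print(link_state("eth0"))  # interface name taken from the log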
Feb 13 18:55:37.358646 ignition[871]: Ignition 2.20.0
Feb 13 18:55:37.358656 ignition[871]: Stage: fetch-offline
Feb 13 18:55:37.358709 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:37.358717 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:37.358803 ignition[871]: parsed url from cmdline: ""
Feb 13 18:55:37.358807 ignition[871]: no config URL provided
Feb 13 18:55:37.358811 ignition[871]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:55:37.358818 ignition[871]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:55:37.358822 ignition[871]: failed to fetch config: resource requires networking
Feb 13 18:55:37.358993 ignition[871]: Ignition finished successfully
Feb 13 18:55:37.360558 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:55:37.381843 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 18:55:37.397414 ignition[882]: Ignition 2.20.0
Feb 13 18:55:37.397423 ignition[882]: Stage: fetch
Feb 13 18:55:37.397618 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:37.397628 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:37.397753 ignition[882]: parsed url from cmdline: ""
Feb 13 18:55:37.397759 ignition[882]: no config URL provided
Feb 13 18:55:37.397764 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:55:37.397772 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:55:37.397799 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 13 18:55:37.504428 ignition[882]: GET result: OK
Feb 13 18:55:37.504557 ignition[882]: config has been read from IMDS userdata
Feb 13 18:55:37.504601 ignition[882]: parsing config with SHA512: aae8811000a7d67c3c9b180f534625252f091e74a9c5daf18c77d17704f03a94df7b5cc1ffc9786c4e35c909d932023cf8993074023ed57169be3e9c54af71ae
Feb 13 18:55:37.510880 unknown[882]: fetched base config from "system"
Feb 13 18:55:37.510894 unknown[882]: fetched base config from "system"
Feb 13 18:55:37.510899 unknown[882]: fetched user config from "azure"
Feb 13 18:55:37.511409 ignition[882]: fetch: fetch complete
Feb 13 18:55:37.511416 ignition[882]: fetch: fetch passed
Feb 13 18:55:37.511483 ignition[882]: Ignition finished successfully
Feb 13 18:55:37.515177 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
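The fetch stage above pulled the user configuration from the Azure Instance Metadata Service endpoint shown in the GET line and hashed it with SHA512 before parsing. The sketch below mirrors that request shape for illustration only (it is not Ignition's code): IMDS requires the "Metadata: true" request header, and the userData payload comes back base64-encoded.

    #!/usr/bin/env python3
    # Illustrative sketch of the IMDS userData fetch logged above; URL copied from
    # the log, not Ignition's actual implementation.
    import base64
    import hashlib
    import urllib.request

    IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                     "?api-version=2021-01-01&format=text")

    def fetch_userdata() -> bytes:
        req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return base64.b64decode(resp.read())

    if __name__ == "__main__":
        config = fetch_userdata()
        # Mirrors the "parsing config with SHA512" line above
        print("SHA512:", hashlib.sha512(config).hexdigest())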
Feb 13 18:55:37.538867 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 18:55:37.556795 ignition[888]: Ignition 2.20.0
Feb 13 18:55:37.556806 ignition[888]: Stage: kargs
Feb 13 18:55:37.557017 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:37.557028 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:37.558210 ignition[888]: kargs: kargs passed
Feb 13 18:55:37.558284 ignition[888]: Ignition finished successfully
Feb 13 18:55:37.563651 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 18:55:37.589994 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 18:55:37.615354 ignition[895]: Ignition 2.20.0
Feb 13 18:55:37.615361 ignition[895]: Stage: disks
Feb 13 18:55:37.615553 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:37.615563 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:37.616633 ignition[895]: disks: disks passed
Feb 13 18:55:37.616715 ignition[895]: Ignition finished successfully
Feb 13 18:55:37.619906 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 18:55:37.626896 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 18:55:37.636840 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 18:55:37.649629 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:55:37.658951 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:55:37.671580 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:55:37.704925 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 18:55:37.789211 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Feb 13 18:55:37.793292 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 18:55:37.816918 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 18:55:37.875709 kernel: EXT4-fs (sda9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none.
Feb 13 18:55:37.876957 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 18:55:37.882278 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
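The EXT4 line above shows /dev/sda9 mounted read-write at /sysroot with ordered data mode. A small illustrative check of the same fact, reading the kernel's mount table rather than the journal:

    #!/usr/bin/env python3
    # Sketch: look up a mount point in /proc/self/mounts (device, fstype, options).
    # The mount point "/sysroot" is taken from the log.
    def mount_entry(target: str = "/sysroot"):
        with open("/proc/self/mounts") as f:
            for line in f:
                device, mountpoint, fstype, options, *_ = line.split()
                if mountpoint == target:
                    return {"device": device, "fstype": fstype,
                            "options": options.split(",")}
        return None

    if __name__ == "__main__":
        print(mount_entry())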
Feb 13 18:55:37.928796 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:55:37.936868 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 18:55:37.949837 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 18:55:37.966028 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 18:55:37.966065 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:55:37.994354 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 18:55:38.000260 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915)
Feb 13 18:55:38.000282 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:38.028966 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:55:38.028988 kernel: BTRFS info (device sda6): using free space tree
Feb 13 18:55:38.039170 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 18:55:38.039481 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 18:55:38.054465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:55:38.504938 systemd-networkd[873]: eth0: Gained IPv6LL
Feb 13 18:55:38.518306 coreos-metadata[917]: Feb 13 18:55:38.518 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 18:55:38.527905 coreos-metadata[917]: Feb 13 18:55:38.521 INFO Fetch successful
Feb 13 18:55:38.527905 coreos-metadata[917]: Feb 13 18:55:38.521 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 13 18:55:38.544566 coreos-metadata[917]: Feb 13 18:55:38.544 INFO Fetch successful
Feb 13 18:55:38.558752 coreos-metadata[917]: Feb 13 18:55:38.558 INFO wrote hostname ci-4186.1.1-a-21f48afc48 to /sysroot/etc/hostname
Feb 13 18:55:38.561112 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
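flatcar-metadata-hostname fetched the instance name from the IMDS endpoint shown above and wrote it to /sysroot/etc/hostname. The following sketch mirrors that flow for illustration; the output path here is a stand-in, not the agent's real destination.

    #!/usr/bin/env python3
    # Illustrative sketch of the hostname fetch logged above; URL copied from the
    # log, output path is a placeholder.
    import urllib.request

    IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")

    def fetch_instance_name() -> str:
        req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()

    if __name__ == "__main__":
        name = fetch_instance_name()
        with open("/tmp/hostname.example", "w") as f:  # stand-in for /sysroot/etc/hostname
            f.write(name + "\n")
        print("hostname:", name)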
Feb 13 18:55:38.661150 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 18:55:38.701945 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Feb 13 18:55:38.726816 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 18:55:38.753630 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 18:55:38.760864 systemd-networkd[873]: enP1331s1: Gained IPv6LL
Feb 13 18:55:39.624488 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 18:55:39.641942 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 18:55:39.650870 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 18:55:39.676433 kernel: BTRFS info (device sda6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:39.669372 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 18:55:39.701380 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 18:55:39.716594 ignition[1036]: INFO     : Ignition 2.20.0
Feb 13 18:55:39.716594 ignition[1036]: INFO     : Stage: mount
Feb 13 18:55:39.716594 ignition[1036]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:39.716594 ignition[1036]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:39.716594 ignition[1036]: INFO     : mount: mount passed
Feb 13 18:55:39.716594 ignition[1036]: INFO     : Ignition finished successfully
Feb 13 18:55:39.717032 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 18:55:39.746781 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 18:55:39.760930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:55:39.796707 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1049)
Feb 13 18:55:39.810494 kernel: BTRFS info (device sda6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:55:39.810531 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:55:39.815158 kernel: BTRFS info (device sda6): using free space tree
Feb 13 18:55:39.824505 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 18:55:39.825093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:55:39.851194 ignition[1066]: INFO     : Ignition 2.20.0
Feb 13 18:55:39.855628 ignition[1066]: INFO     : Stage: files
Feb 13 18:55:39.855628 ignition[1066]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:39.855628 ignition[1066]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:39.855628 ignition[1066]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 18:55:39.878925 ignition[1066]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 18:55:39.878925 ignition[1066]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 18:55:39.938465 ignition[1066]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 18:55:39.946779 ignition[1066]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 18:55:39.946779 ignition[1066]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 18:55:39.946779 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 18:55:39.946779 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 18:55:39.938944 unknown[1066]: wrote ssh authorized keys file for user: core
Feb 13 18:55:40.021095 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 18:55:40.210618 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 18:55:40.210618 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 18:55:40.236272 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 18:55:40.649685 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 18:55:40.723311 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:55:40.805697 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 18:55:41.124665 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 18:55:41.348348 ignition[1066]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 18:55:41.348348 ignition[1066]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(e): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: createResultFile: createFiles: op(f): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:55:41.371882 ignition[1066]: INFO     : files: files passed
Feb 13 18:55:41.371882 ignition[1066]: INFO     : Ignition finished successfully
Feb 13 18:55:41.366540 systemd[1]: Finished ignition-files.service - Ignition (files).
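The files stage above wrote archives, manifests, the kubernetes.raw symlink under /etc/extensions, the prepare-helm.service unit, and the core user's SSH keys, all driven by the Ignition config fetched earlier. The snippet below sketches the general shape of a spec 3.x config that would produce such a run; the URLs and paths are copied from the log, while the SSH key and unit contents are placeholders, not the node's actual user data.

    #!/usr/bin/env python3
    # Hand-written example of an Ignition (spec 3.x) config resembling the files
    # stage logged above. Paths are as Ignition sees them (it prefixes /sysroot
    # itself when writing); values marked "placeholder" are not from the log.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            "users": [
                {"name": "core",
                 "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder-key"]}
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service",
                 "enabled": True,
                 "contents": "[Unit]\nDescription=placeholder unit body\n"},
            ]
        },
    }

    print(json.dumps(config, indent=2))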
Feb 13 18:55:41.408124 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 18:55:41.415846 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 18:55:41.442646 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 18:55:41.535730 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:55:41.535730 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:55:41.442773 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 18:55:41.576053 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:55:41.451900 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:55:41.469520 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 18:55:41.498939 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 18:55:41.537935 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 18:55:41.538059 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 18:55:41.552334 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 18:55:41.569257 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 18:55:41.582452 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 18:55:41.599981 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 18:55:41.643170 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:55:41.660960 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 18:55:41.695362 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 18:55:41.695464 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 18:55:41.708008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:55:41.723113 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:55:41.736091 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 18:55:41.748559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 18:55:41.748629 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:55:41.774373 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 18:55:41.785923 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 18:55:41.797415 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 18:55:41.811134 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:55:41.825640 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 18:55:41.838048 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 18:55:41.849856 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:55:41.863633 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 18:55:41.876537 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 18:55:41.888004 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 18:55:41.899593 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 18:55:41.899666 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:55:41.918097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:55:41.928462 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:55:41.941296 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 18:55:41.941337 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:55:41.954266 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 18:55:41.954337 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:55:41.972633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 18:55:41.972680 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:55:41.980423 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 18:55:41.980469 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 18:55:41.990960 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 18:55:41.991000 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 18:55:42.024888 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 18:55:42.044492 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 18:55:42.044578 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:55:42.056828 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 18:55:42.059068 ignition[1118]: INFO     : Ignition 2.20.0
Feb 13 18:55:42.059068 ignition[1118]: INFO     : Stage: umount
Feb 13 18:55:42.059068 ignition[1118]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:55:42.059068 ignition[1118]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 13 18:55:42.059068 ignition[1118]: INFO     : umount: umount passed
Feb 13 18:55:42.059068 ignition[1118]: INFO     : Ignition finished successfully
Feb 13 18:55:42.066242 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 18:55:42.066304 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:55:42.078564 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 18:55:42.078615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:55:42.108573 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 18:55:42.108679 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 18:55:42.128003 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 18:55:42.128116 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 18:55:42.141732 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 18:55:42.141801 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 18:55:42.153062 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 18:55:42.153110 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 18:55:42.164626 systemd[1]: Stopped target network.target - Network.
Feb 13 18:55:42.182079 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 18:55:42.182165 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:55:42.199363 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 18:55:42.212087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 18:55:42.215724 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:55:42.234280 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 18:55:42.246738 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 18:55:42.257924 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 18:55:42.257980 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:55:42.269919 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 18:55:42.269967 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:55:42.282036 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 18:55:42.282094 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 18:55:42.294031 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 18:55:42.294092 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 18:55:42.306856 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 18:55:42.319673 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 18:55:42.333745 systemd-networkd[873]: eth0: DHCPv6 lease lost
Feb 13 18:55:42.334960 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 18:55:42.339603 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 18:55:42.339743 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 18:55:42.348863 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 18:55:42.600560 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: Data path switched from VF: enP1331s1
Feb 13 18:55:42.348954 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 18:55:42.362944 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 18:55:42.363089 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 18:55:42.377215 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 18:55:42.377266 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:55:42.392183 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 18:55:42.392246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 18:55:42.428941 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 18:55:42.439041 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 18:55:42.439115 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:55:42.450920 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 18:55:42.450968 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:55:42.462373 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 18:55:42.462418 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:55:42.474357 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 18:55:42.474407 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:55:42.487139 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:55:42.539613 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 18:55:42.539804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:55:42.551264 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 18:55:42.551312 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:55:42.562247 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 18:55:42.562282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:55:42.574288 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 18:55:42.574338 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:55:42.600620 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 18:55:42.600723 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:55:42.613797 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:55:42.613853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:55:42.660980 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 18:55:42.676232 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 18:55:42.676319 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:55:42.693859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:55:42.693916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:42.705984 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 18:55:42.706092 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 18:55:42.901120 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Feb 13 18:55:42.717417 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 18:55:42.717513 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 18:55:42.730670 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 18:55:42.762940 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 18:55:42.802740 systemd[1]: Switching root.
Feb 13 18:55:42.931243 systemd-journald[218]: Journal stopped
Feb 13 18:55:47.457110 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 18:55:47.457136 kernel: SELinux:  policy capability open_perms=1
Feb 13 18:55:47.457146 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 18:55:47.457153 kernel: SELinux:  policy capability always_check_network=0
Feb 13 18:55:47.457163 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 18:55:47.457171 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 18:55:47.457180 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 18:55:47.457187 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 18:55:47.457197 kernel: audit: type=1403 audit(1739472944.339:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 18:55:47.457207 systemd[1]: Successfully loaded SELinux policy in 139.768ms.
Feb 13 18:55:47.457218 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.677ms.
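With the SELinux policy loaded and the early relabel done, the kernel exposes the current mode under /sys/fs/selinux. A minimal check, for illustration only:

    #!/usr/bin/env python3
    # Sketch: report the SELinux mode reached above. /sys/fs/selinux/enforce holds
    # "0" (permissive) or "1" (enforcing) when SELinux is active.
    from pathlib import Path

    def selinux_mode() -> str:
        enforce = Path("/sys/fs/selinux/enforce")
        if not enforce.exists():
            return "disabled or selinuxfs not mounted"
        return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

    if __name__ == "__main__":
        print(selinux_mode())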
Feb 13 18:55:47.457228 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:55:47.457237 systemd[1]: Detected virtualization microsoft.
Feb 13 18:55:47.457245 systemd[1]: Detected architecture arm64.
Feb 13 18:55:47.457255 systemd[1]: Detected first boot.
Feb 13 18:55:47.457265 systemd[1]: Hostname set to <ci-4186.1.1-a-21f48afc48>.
Feb 13 18:55:47.457274 systemd[1]: Initializing machine ID from random generator.
Feb 13 18:55:47.457283 zram_generator::config[1161]: No configuration found.
Feb 13 18:55:47.457292 systemd[1]: Populated /etc with preset unit settings.
Feb 13 18:55:47.457301 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 18:55:47.457310 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 18:55:47.457319 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 18:55:47.457330 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 18:55:47.457339 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 18:55:47.457348 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 18:55:47.457357 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 18:55:47.457366 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 18:55:47.457376 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 18:55:47.457385 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 18:55:47.457396 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 18:55:47.457405 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:55:47.457414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:55:47.457423 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 18:55:47.457432 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 18:55:47.457441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 18:55:47.457450 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:55:47.457459 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 18:55:47.457470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:55:47.457479 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 18:55:47.457488 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 18:55:47.457499 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:55:47.457508 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 18:55:47.457518 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:55:47.457527 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:55:47.457536 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:55:47.457546 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:55:47.457556 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 18:55:47.457565 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 18:55:47.457574 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:55:47.457583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:55:47.457594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:55:47.457605 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 18:55:47.457614 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 18:55:47.457624 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 18:55:47.457633 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 18:55:47.457642 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 18:55:47.457651 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 18:55:47.457661 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 18:55:47.457672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 18:55:47.457681 systemd[1]: Reached target machines.target - Containers.
Feb 13 18:55:47.458762 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 18:55:47.458787 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:55:47.458797 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:55:47.458807 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 18:55:47.458817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:55:47.458826 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:55:47.458841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:55:47.458850 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 18:55:47.458860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:55:47.458870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 18:55:47.458879 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 18:55:47.458888 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 18:55:47.458897 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 18:55:47.458907 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 18:55:47.458917 kernel: loop: module loaded
Feb 13 18:55:47.458926 kernel: fuse: init (API version 7.39)
Feb 13 18:55:47.458935 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:55:47.458944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:55:47.458953 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 18:55:47.458963 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 18:55:47.458972 kernel: ACPI: bus type drm_connector registered
Feb 13 18:55:47.459008 systemd-journald[1264]: Collecting audit messages is disabled.
Feb 13 18:55:47.459032 systemd-journald[1264]: Journal started
Feb 13 18:55:47.459061 systemd-journald[1264]: Runtime Journal (/run/log/journal/7913fa2609b341a28727e5c16adb4861) is 8.0M, max 78.5M, 70.5M free.
Feb 13 18:55:46.350566 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 18:55:46.460600 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 18:55:46.461091 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 18:55:46.461414 systemd[1]: systemd-journald.service: Consumed 3.613s CPU time.
Feb 13 18:55:47.483923 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:55:47.493630 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 18:55:47.493695 systemd[1]: Stopped verity-setup.service.
Feb 13 18:55:47.512119 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:55:47.512961 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 18:55:47.521726 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 18:55:47.529010 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 18:55:47.535179 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 18:55:47.541520 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 18:55:47.548442 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 18:55:47.554624 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 18:55:47.563390 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:55:47.572060 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 18:55:47.572213 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 18:55:47.579943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:55:47.580788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:55:47.587825 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:55:47.587962 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:55:47.594291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:55:47.594423 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:55:47.602024 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 18:55:47.602156 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 18:55:47.610247 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:55:47.610384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:55:47.617667 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:55:47.625336 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 18:55:47.633037 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 18:55:47.640564 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:55:47.658272 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 18:55:47.668782 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 18:55:47.683850 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 18:55:47.690432 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 18:55:47.690476 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:55:47.697068 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 18:55:47.705516 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 18:55:47.713286 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 18:55:47.719243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:55:47.721761 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 18:55:47.730928 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 18:55:47.739269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:55:47.740650 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 18:55:47.747841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:55:47.749154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:55:47.756944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 18:55:47.778170 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 18:55:47.791586 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 18:55:47.806916 kernel: loop0: detected capacity change from 0 to 28752
Feb 13 18:55:47.810267 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 18:55:47.824648 systemd-journald[1264]: Time spent on flushing to /var/log/journal/7913fa2609b341a28727e5c16adb4861 is 75.444ms for 905 entries.
Feb 13 18:55:47.824648 systemd-journald[1264]: System Journal (/var/log/journal/7913fa2609b341a28727e5c16adb4861) is 11.8M, max 2.6G, 2.6G free.
Feb 13 18:55:47.953232 systemd-journald[1264]: Received client request to flush runtime journal.
Feb 13 18:55:47.953282 systemd-journald[1264]: /var/log/journal/7913fa2609b341a28727e5c16adb4861/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Feb 13 18:55:47.953308 systemd-journald[1264]: Rotating system journal.
Feb 13 18:55:47.832344 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 18:55:47.841741 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 18:55:47.854117 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 18:55:47.884725 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 18:55:47.911008 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 18:55:47.927515 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:55:47.936607 udevadm[1299]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 18:55:47.956181 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
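The flush above moves the runtime journal out of /run/log/journal into the persistent /var/log/journal location sized in the preceding journald lines. One way to see the resulting on-disk footprint is journalctl's --disk-usage query, wrapped here as a sketch:

    #!/usr/bin/env python3
    # Sketch: report journal disk usage after the flush logged above by calling
    # `journalctl --disk-usage` and returning its one-line summary.
    import subprocess

    def journal_disk_usage() -> str:
        out = subprocess.run(
            ["journalctl", "--disk-usage"],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    if __name__ == "__main__":
        print(journal_disk_usage())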
Feb 13 18:55:47.992238 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 18:55:47.992856 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 18:55:48.207716 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 18:55:48.266461 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 18:55:48.281882 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:55:48.296725 kernel: loop1: detected capacity change from 0 to 116784
Feb 13 18:55:48.363468 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Feb 13 18:55:48.363852 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Feb 13 18:55:48.368384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:55:48.634764 kernel: loop2: detected capacity change from 0 to 201592
Feb 13 18:55:48.685717 kernel: loop3: detected capacity change from 0 to 113552
Feb 13 18:55:48.924067 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 18:55:48.937877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:55:48.960370 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Feb 13 18:55:49.022724 kernel: loop4: detected capacity change from 0 to 28752
Feb 13 18:55:49.041826 kernel: loop5: detected capacity change from 0 to 116784
Feb 13 18:55:49.046320 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:55:49.070721 kernel: loop6: detected capacity change from 0 to 201592
Feb 13 18:55:49.080235 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:55:49.093768 kernel: loop7: detected capacity change from 0 to 113552
Feb 13 18:55:49.102518 (sd-merge)[1324]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Feb 13 18:55:49.102982 (sd-merge)[1324]: Merged extensions into '/usr'.
Feb 13 18:55:49.108762 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 18:55:49.108778 systemd[1]: Reloading...
Feb 13 18:55:49.237552 zram_generator::config[1373]: No configuration found.
Feb 13 18:55:49.295753 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 18:55:49.342423 kernel: hv_vmbus: registering driver hv_balloon
Feb 13 18:55:49.342525 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 13 18:55:49.347412 kernel: hv_balloon: Memory hot add disabled on ARM64
Feb 13 18:55:49.387885 kernel: hv_vmbus: registering driver hyperv_fb
Feb 13 18:55:49.387980 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 13 18:55:49.407712 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 13 18:55:49.407811 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1328)
Feb 13 18:55:49.407837 kernel: Console: switching to colour dummy device 80x25
Feb 13 18:55:49.421712 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 18:55:49.465008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:55:49.544128 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 18:55:49.544415 systemd[1]: Reloading finished in 435 ms.
Feb 13 18:55:49.569383 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
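systemd-sysext has just merged the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images into /usr and /opt. The sketch below lists images in the usual sysext search directories (/etc/extensions, /run/extensions, /var/lib/extensions, among others); the directory choice follows the systemd-sysext documentation rather than anything in this log, and the kubernetes.raw symlink written by Ignition earlier would show up here.

    #!/usr/bin/env python3
    # Sketch: enumerate candidate system extension images in the common sysext
    # search directories (assumed set; see systemd-sysext documentation).
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions():
        found = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if p.is_dir():
                # raw disk images and plain directory trees are both accepted
                found.extend(sorted(str(e) for e in p.iterdir()))
        return found

    if __name__ == "__main__":
        for entry in list_extensions():
            print(entry)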
Feb 13 18:55:49.605735 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Feb 13 18:55:49.624067 systemd[1]: Starting ensure-sysext.service...
Feb 13 18:55:49.630887 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 18:55:49.643872 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:55:49.659085 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 18:55:49.678940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:55:49.686418 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 18:55:49.695858 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 18:55:49.696109 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 18:55:49.696864 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 18:55:49.697077 systemd-tmpfiles[1506]: ACLs are not supported, ignoring.
Feb 13 18:55:49.697123 systemd-tmpfiles[1506]: ACLs are not supported, ignoring.
Feb 13 18:55:49.697150 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 18:55:49.721081 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 18:55:49.721509 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:55:49.722889 systemd-tmpfiles[1506]: Skipping /boot
Feb 13 18:55:49.735029 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 18:55:49.735743 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:55:49.735845 systemd-tmpfiles[1506]: Skipping /boot
Feb 13 18:55:49.747609 systemd[1]: Reloading requested from client PID 1504 ('systemctl') (unit ensure-sysext.service)...
Feb 13 18:55:49.747626 systemd[1]: Reloading...
Feb 13 18:55:49.794411 lvm[1514]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 18:55:49.827727 zram_generator::config[1545]: No configuration found.
Feb 13 18:55:49.868126 systemd-networkd[1337]: lo: Link UP
Feb 13 18:55:49.868136 systemd-networkd[1337]: lo: Gained carrier
Feb 13 18:55:49.870605 systemd-networkd[1337]: Enumeration completed
Feb 13 18:55:49.871452 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:55:49.871548 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:55:49.921710 kernel: mlx5_core 0533:00:02.0 enP1331s1: Link up
Feb 13 18:55:49.949393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:55:49.949866 kernel: hv_netvsc 002248bb-29e8-0022-48bb-29e8002248bb eth0: Data path switched to VF: enP1331s1
Feb 13 18:55:49.950533 systemd-networkd[1337]: enP1331s1: Link UP
Feb 13 18:55:49.950671 systemd-networkd[1337]: eth0: Link UP
Feb 13 18:55:49.950675 systemd-networkd[1337]: eth0: Gained carrier
Feb 13 18:55:49.950700 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:55:49.954995 systemd-networkd[1337]: enP1331s1: Gained carrier
Feb 13 18:55:49.960747 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 18:55:50.029053 systemd[1]: Reloading finished in 281 ms.
Feb 13 18:55:50.049068 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:55:50.064206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:55:50.074498 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 18:55:50.088484 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:55:50.101003 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 18:55:50.107812 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 18:55:50.116995 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 18:55:50.127070 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 18:55:50.129405 lvm[1609]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 18:55:50.144019 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 18:55:50.153652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:55:50.161872 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 18:55:50.174089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:55:50.182657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:55:50.195945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:55:50.204996 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:55:50.212840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:55:50.215886 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 18:55:50.228083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:55:50.229832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:55:50.239437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:55:50.239988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:55:50.253493 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 18:55:50.270356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:55:50.270809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:55:50.288773 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 18:55:50.301569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:55:50.308605 systemd-resolved[1613]: Positive Trust Anchors:
Feb 13 18:55:50.308630 systemd-resolved[1613]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:55:50.308662 systemd-resolved[1613]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:55:50.312005 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:55:50.322796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:55:50.334029 systemd-resolved[1613]: Using system hostname 'ci-4186.1.1-a-21f48afc48'.
Feb 13 18:55:50.340806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:55:50.347787 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:55:50.351260 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:55:50.360574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:55:50.360769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:55:50.368480 augenrules[1648]: No rules
Feb 13 18:55:50.370714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:55:50.380935 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 18:55:50.381146 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 18:55:50.389129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:55:50.389303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:55:50.397553 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:55:50.397686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:55:50.411380 systemd[1]: Reached target network.target - Network.
Feb 13 18:55:50.416891 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:55:50.435989 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 18:55:50.444167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:55:50.447005 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:55:50.459570 augenrules[1658]: /sbin/augenrules: No change
Feb 13 18:55:50.462562 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:55:50.476000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:55:50.476245 augenrules[1678]: No rules
Feb 13 18:55:50.486040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:55:50.492747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:55:50.493096 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 18:55:50.501470 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 18:55:50.501750 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 18:55:50.508063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:55:50.508307 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:55:50.517380 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:55:50.517615 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:55:50.525598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:55:50.525965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:55:50.534408 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:55:50.534644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:55:50.544933 systemd[1]: Finished ensure-sysext.service.
Feb 13 18:55:50.554991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:55:50.555176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:55:50.630701 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 18:55:50.639113 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 18:55:51.112938 systemd-networkd[1337]: enP1331s1: Gained IPv6LL
Feb 13 18:55:51.880963 systemd-networkd[1337]: eth0: Gained IPv6LL
Feb 13 18:55:51.883926 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 18:55:51.892301 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 18:55:54.998306 ldconfig[1290]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 18:55:56.322623 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 18:55:56.342910 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 18:55:56.352341 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 18:55:56.359651 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:55:56.367327 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 18:55:56.377011 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 18:55:56.385542 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 18:55:56.392789 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 18:55:56.400789 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 18:55:56.408468 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 18:55:56.408506 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:55:56.414223 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:55:56.421474 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 18:55:56.430401 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 18:55:56.441412 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 18:55:56.449640 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 18:55:56.457568 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:55:56.463704 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:55:56.472238 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 18:55:56.472268 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 18:55:56.483808 systemd[1]: Starting chronyd.service - NTP client/server...
Feb 13 18:55:56.493860 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 18:55:56.506919 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 18:55:56.518346 (chronyd)[1697]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Feb 13 18:55:56.519242 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 18:55:56.527285 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 18:55:56.537929 jq[1704]: false
Feb 13 18:55:56.538919 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 18:55:56.547234 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 18:55:56.547357 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Feb 13 18:55:56.548963 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Feb 13 18:55:56.557644 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Feb 13 18:55:56.559076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:55:56.569413 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 18:55:56.584525 KVP[1706]: KVP starting; pid is:1706
Feb 13 18:55:56.588042 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 18:55:56.599878 chronyd[1718]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Feb 13 18:55:56.603762 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 18:55:56.612892 chronyd[1718]: Timezone right/UTC failed leap second check, ignoring
Feb 13 18:55:56.613140 chronyd[1718]: Loaded seccomp filter (level 2)
Feb 13 18:55:56.621157 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 18:55:56.629775 kernel: hv_utils: KVP IC version 4.0
Feb 13 18:55:56.629898 KVP[1706]: KVP LIC Version: 3.1
Feb 13 18:55:56.632191 extend-filesystems[1705]: Found loop4
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found loop5
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found loop6
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found loop7
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found sda
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found sda1
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found sda2
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found sda3
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found usr
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found sda4
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found sda6
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found sda7
Feb 13 18:55:56.641848 extend-filesystems[1705]: Found sda9
Feb 13 18:55:56.641848 extend-filesystems[1705]: Checking size of /dev/sda9
Feb 13 18:55:56.635932 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 18:55:56.713552 dbus-daemon[1700]: [system] SELinux support is enabled
Feb 13 18:55:56.855984 extend-filesystems[1705]: Old size kept for /dev/sda9
Feb 13 18:55:56.855984 extend-filesystems[1705]: Found sr0
Feb 13 18:55:56.678034 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 18:55:56.893153 coreos-metadata[1699]: Feb 13 18:55:56.854 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 13 18:55:56.893153 coreos-metadata[1699]: Feb 13 18:55:56.862 INFO Fetch successful
Feb 13 18:55:56.893153 coreos-metadata[1699]: Feb 13 18:55:56.864 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Feb 13 18:55:56.893153 coreos-metadata[1699]: Feb 13 18:55:56.875 INFO Fetch successful
Feb 13 18:55:56.893153 coreos-metadata[1699]: Feb 13 18:55:56.875 INFO Fetching http://168.63.129.16/machine/0b2911bc-2181-4d3f-bee9-2b3294ee4188/41b41dc2%2D1a1e%2D4d93%2Dae6b%2D5692c3e5d8bb.%5Fci%2D4186.1.1%2Da%2D21f48afc48?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Feb 13 18:55:56.893153 coreos-metadata[1699]: Feb 13 18:55:56.885 INFO Fetch successful
Feb 13 18:55:56.893153 coreos-metadata[1699]: Feb 13 18:55:56.885 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Feb 13 18:55:56.702224 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 18:55:56.893591 update_engine[1735]: I20250213 18:55:56.833659  1735 main.cc:92] Flatcar Update Engine starting
Feb 13 18:55:56.893591 update_engine[1735]: I20250213 18:55:56.842202  1735 update_check_scheduler.cc:74] Next update check in 11m55s
Feb 13 18:55:56.702757 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 18:55:56.893873 jq[1739]: true
Feb 13 18:55:56.711881 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 18:55:56.742132 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 18:55:56.770677 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 18:55:56.780451 systemd[1]: Started chronyd.service - NTP client/server.
Feb 13 18:55:56.796193 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 18:55:56.796393 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 18:55:56.796669 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 18:55:56.796825 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 18:55:56.820323 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 18:55:56.820481 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 18:55:56.834462 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 18:55:56.855612 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 18:55:56.861838 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 18:55:56.920245 coreos-metadata[1699]: Feb 13 18:55:56.905 INFO Fetch successful
Feb 13 18:55:56.908368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 18:55:56.908396 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 18:55:56.913188 (ntainerd)[1761]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 18:55:56.923162 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 18:55:56.923182 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 18:55:56.941089 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 18:55:56.949468 jq[1758]: true
Feb 13 18:55:56.956430 tar[1754]: linux-arm64/LICENSE
Feb 13 18:55:56.956430 tar[1754]: linux-arm64/helm
Feb 13 18:55:56.960597 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 18:55:56.975176 systemd-logind[1729]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 13 18:55:56.977897 systemd-logind[1729]: New seat seat0.
Feb 13 18:55:56.980778 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 18:55:57.011773 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1751)
Feb 13 18:55:57.051270 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 18:55:57.064074 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 18:55:57.128130 bash[1818]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 18:55:57.131915 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 18:55:57.147924 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 18:55:57.386883 locksmithd[1779]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 18:55:57.594293 containerd[1761]: time="2025-02-13T18:55:57.591214780Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 18:55:57.671116 containerd[1761]: time="2025-02-13T18:55:57.671015860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:55:57.676989 containerd[1761]: time="2025-02-13T18:55:57.676935500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:55:57.677199 containerd[1761]: time="2025-02-13T18:55:57.677179180Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678072860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678241780Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678259420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678322340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678335820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678510780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678524020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678536860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678546620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 18:55:57.678986 containerd[1761]: time="2025-02-13T18:55:57.678622700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:55:57.681969 containerd[1761]: time="2025-02-13T18:55:57.681932860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:55:57.683171 containerd[1761]: time="2025-02-13T18:55:57.682876180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:55:57.683171 containerd[1761]: time="2025-02-13T18:55:57.682905380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 18:55:57.683171 containerd[1761]: time="2025-02-13T18:55:57.683009620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 18:55:57.683171 containerd[1761]: time="2025-02-13T18:55:57.683051980Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 18:55:57.698720 containerd[1761]: time="2025-02-13T18:55:57.698363820Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 18:55:57.698720 containerd[1761]: time="2025-02-13T18:55:57.698427260Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 18:55:57.698720 containerd[1761]: time="2025-02-13T18:55:57.698442860Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 18:55:57.698720 containerd[1761]: time="2025-02-13T18:55:57.698459500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 18:55:57.698720 containerd[1761]: time="2025-02-13T18:55:57.698473380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 18:55:57.698720 containerd[1761]: time="2025-02-13T18:55:57.698650980Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.700838980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701035820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701055780Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701072100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701089540Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701102820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701135300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701150140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701167260Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701180060Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701192380Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701204940Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701229940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702603 containerd[1761]: time="2025-02-13T18:55:57.701243380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701261460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701274900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701289980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701304020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701316260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701329820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701342180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701356300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701367940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701379380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701391780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701406340Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701428180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701440620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.702962 containerd[1761]: time="2025-02-13T18:55:57.701451380Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 18:55:57.704494 containerd[1761]: time="2025-02-13T18:55:57.703726660Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 18:55:57.705994 containerd[1761]: time="2025-02-13T18:55:57.704919700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 18:55:57.705994 containerd[1761]: time="2025-02-13T18:55:57.704944620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 18:55:57.705994 containerd[1761]: time="2025-02-13T18:55:57.704967740Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 18:55:57.705994 containerd[1761]: time="2025-02-13T18:55:57.704977900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.705994 containerd[1761]: time="2025-02-13T18:55:57.704994540Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 18:55:57.705994 containerd[1761]: time="2025-02-13T18:55:57.705005500Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 18:55:57.705994 containerd[1761]: time="2025-02-13T18:55:57.705015700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 18:55:57.706188 containerd[1761]: time="2025-02-13T18:55:57.705322060Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 18:55:57.706188 containerd[1761]: time="2025-02-13T18:55:57.705371300Z" level=info msg="Connect containerd service"
Feb 13 18:55:57.706188 containerd[1761]: time="2025-02-13T18:55:57.705413620Z" level=info msg="using legacy CRI server"
Feb 13 18:55:57.706188 containerd[1761]: time="2025-02-13T18:55:57.705420460Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 18:55:57.706188 containerd[1761]: time="2025-02-13T18:55:57.705536500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 18:55:57.710890 containerd[1761]: time="2025-02-13T18:55:57.710069860Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 18:55:57.711624 containerd[1761]: time="2025-02-13T18:55:57.711216900Z" level=info msg="Start subscribing containerd event"
Feb 13 18:55:57.711624 containerd[1761]: time="2025-02-13T18:55:57.711275540Z" level=info msg="Start recovering state"
Feb 13 18:55:57.711624 containerd[1761]: time="2025-02-13T18:55:57.711349740Z" level=info msg="Start event monitor"
Feb 13 18:55:57.711624 containerd[1761]: time="2025-02-13T18:55:57.711360740Z" level=info msg="Start snapshots syncer"
Feb 13 18:55:57.711624 containerd[1761]: time="2025-02-13T18:55:57.711369700Z" level=info msg="Start cni network conf syncer for default"
Feb 13 18:55:57.711624 containerd[1761]: time="2025-02-13T18:55:57.711379340Z" level=info msg="Start streaming server"
Feb 13 18:55:57.713383 containerd[1761]: time="2025-02-13T18:55:57.713359380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 18:55:57.713513 containerd[1761]: time="2025-02-13T18:55:57.713497860Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 18:55:57.713792 containerd[1761]: time="2025-02-13T18:55:57.713777540Z" level=info msg="containerd successfully booted in 0.127435s"
Feb 13 18:55:57.713864 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 18:55:57.798166 tar[1754]: linux-arm64/README.md
Feb 13 18:55:57.815802 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 18:55:57.894640 sshd_keygen[1728]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 18:55:57.912916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:55:57.920382 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:55:57.923749 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 18:55:57.942089 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 18:55:57.952315 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Feb 13 18:55:57.963319 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 18:55:57.963502 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 18:55:57.981963 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 18:55:57.997871 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Feb 13 18:55:58.014397 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 18:55:58.031100 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 18:55:58.046092 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 18:55:58.053581 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 18:55:58.063010 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 18:55:58.074176 systemd[1]: Startup finished in 764ms (kernel) + 12.706s (initrd) + 13.873s (userspace) = 27.344s.
Feb 13 18:55:58.119286 agetty[1887]: failed to open credentials directory
Feb 13 18:55:58.119292 agetty[1888]: failed to open credentials directory
Feb 13 18:55:58.300345 login[1887]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:55:58.301953 login[1888]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:55:58.310868 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 18:55:58.318898 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 18:55:58.321908 systemd-logind[1729]: New session 1 of user core.
Feb 13 18:55:58.325651 systemd-logind[1729]: New session 2 of user core.
Feb 13 18:55:58.346669 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 18:55:58.352197 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 18:55:58.359487 (systemd)[1900]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 18:55:58.393051 kubelet[1870]: E0213 18:55:58.392996    1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:55:58.395366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:55:58.395504 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:55:58.492599 systemd[1900]: Queued start job for default target default.target.
Feb 13 18:55:58.502604 systemd[1900]: Created slice app.slice - User Application Slice.
Feb 13 18:55:58.502637 systemd[1900]: Reached target paths.target - Paths.
Feb 13 18:55:58.502649 systemd[1900]: Reached target timers.target - Timers.
Feb 13 18:55:58.505880 systemd[1900]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 18:55:58.515518 systemd[1900]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 18:55:58.516277 systemd[1900]: Reached target sockets.target - Sockets.
Feb 13 18:55:58.516301 systemd[1900]: Reached target basic.target - Basic System.
Feb 13 18:55:58.516347 systemd[1900]: Reached target default.target - Main User Target.
Feb 13 18:55:58.516373 systemd[1900]: Startup finished in 150ms.
Feb 13 18:55:58.516479 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 18:55:58.517759 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 18:55:58.518438 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 18:55:59.651708 waagent[1884]: 2025-02-13T18:55:59.651230Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Feb 13 18:55:59.659874 waagent[1884]: 2025-02-13T18:55:59.659793Z INFO Daemon Daemon OS: flatcar 4186.1.1
Feb 13 18:55:59.666152 waagent[1884]: 2025-02-13T18:55:59.666083Z INFO Daemon Daemon Python: 3.11.10
Feb 13 18:55:59.672474 waagent[1884]: 2025-02-13T18:55:59.672367Z INFO Daemon Daemon Run daemon
Feb 13 18:55:59.678239 waagent[1884]: 2025-02-13T18:55:59.678184Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.1'
Feb 13 18:55:59.688129 waagent[1884]: 2025-02-13T18:55:59.688059Z INFO Daemon Daemon Using waagent for provisioning
Feb 13 18:55:59.696159 waagent[1884]: 2025-02-13T18:55:59.696103Z INFO Daemon Daemon Activate resource disk
Feb 13 18:55:59.701271 waagent[1884]: 2025-02-13T18:55:59.701210Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 13 18:55:59.714669 waagent[1884]: 2025-02-13T18:55:59.714603Z INFO Daemon Daemon Found device: None
Feb 13 18:55:59.720109 waagent[1884]: 2025-02-13T18:55:59.720048Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 13 18:55:59.733666 waagent[1884]: 2025-02-13T18:55:59.733603Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 13 18:55:59.747145 waagent[1884]: 2025-02-13T18:55:59.747094Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 13 18:55:59.754198 waagent[1884]: 2025-02-13T18:55:59.754140Z INFO Daemon Daemon Running default provisioning handler
Feb 13 18:55:59.767145 waagent[1884]: 2025-02-13T18:55:59.766580Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Feb 13 18:55:59.787618 waagent[1884]: 2025-02-13T18:55:59.787548Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 13 18:55:59.800506 waagent[1884]: 2025-02-13T18:55:59.800438Z INFO Daemon Daemon cloud-init is enabled: False
Feb 13 18:55:59.807038 waagent[1884]: 2025-02-13T18:55:59.806980Z INFO Daemon Daemon Copying ovf-env.xml
Feb 13 18:55:59.998843 waagent[1884]: 2025-02-13T18:55:59.998657Z INFO Daemon Daemon Successfully mounted dvd
Feb 13 18:56:00.021274 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 13 18:56:00.024336 waagent[1884]: 2025-02-13T18:56:00.024242Z INFO Daemon Daemon Detect protocol endpoint
Feb 13 18:56:00.030346 waagent[1884]: 2025-02-13T18:56:00.030274Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 13 18:56:00.039014 waagent[1884]: 2025-02-13T18:56:00.038948Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 13 18:56:00.048070 waagent[1884]: 2025-02-13T18:56:00.048009Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 13 18:56:00.056342 waagent[1884]: 2025-02-13T18:56:00.056285Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 13 18:56:00.063351 waagent[1884]: 2025-02-13T18:56:00.063295Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 13 18:56:00.118402 waagent[1884]: 2025-02-13T18:56:00.118353Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 13 18:56:00.129106 waagent[1884]: 2025-02-13T18:56:00.129074Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 13 18:56:00.137671 waagent[1884]: 2025-02-13T18:56:00.137611Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 13 18:56:00.535727 waagent[1884]: 2025-02-13T18:56:00.531664Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 13 18:56:00.538886 waagent[1884]: 2025-02-13T18:56:00.538814Z INFO Daemon Daemon Forcing an update of the goal state.
Feb 13 18:56:00.549620 waagent[1884]: 2025-02-13T18:56:00.549564Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 13 18:56:00.911645 waagent[1884]: 2025-02-13T18:56:00.911598Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Feb 13 18:56:00.918972 waagent[1884]: 2025-02-13T18:56:00.918920Z INFO Daemon
Feb 13 18:56:00.923394 waagent[1884]: 2025-02-13T18:56:00.923341Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: dbf1bcbe-c78f-4ec9-b2fe-734c69c31d97 eTag: 9005364897108872658 source: Fabric]
Feb 13 18:56:00.940102 waagent[1884]: 2025-02-13T18:56:00.940055Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Feb 13 18:56:00.948082 waagent[1884]: 2025-02-13T18:56:00.948034Z INFO Daemon
Feb 13 18:56:00.951283 waagent[1884]: 2025-02-13T18:56:00.951239Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Feb 13 18:56:00.963058 waagent[1884]: 2025-02-13T18:56:00.963020Z INFO Daemon Daemon Downloading artifacts profile blob
Feb 13 18:56:01.077806 waagent[1884]: 2025-02-13T18:56:01.077703Z INFO Daemon Downloaded certificate {'thumbprint': '3D9523F92EB7DFF4CF777A4AFDE0F91B6D8F2548', 'hasPrivateKey': True}
Feb 13 18:56:01.090465 waagent[1884]: 2025-02-13T18:56:01.090412Z INFO Daemon Downloaded certificate {'thumbprint': '32F0BE82A5EA9CF550E00CD92AD3EB450D74EFB4', 'hasPrivateKey': False}
Feb 13 18:56:01.103985 waagent[1884]: 2025-02-13T18:56:01.103933Z INFO Daemon Fetch goal state completed
Feb 13 18:56:01.117505 waagent[1884]: 2025-02-13T18:56:01.117460Z INFO Daemon Daemon Starting provisioning
Feb 13 18:56:01.123823 waagent[1884]: 2025-02-13T18:56:01.123754Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 13 18:56:01.129832 waagent[1884]: 2025-02-13T18:56:01.129774Z INFO Daemon Daemon Set hostname [ci-4186.1.1-a-21f48afc48]
Feb 13 18:56:01.154731 waagent[1884]: 2025-02-13T18:56:01.154192Z INFO Daemon Daemon Publish hostname [ci-4186.1.1-a-21f48afc48]
Feb 13 18:56:01.164310 waagent[1884]: 2025-02-13T18:56:01.164205Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 13 18:56:01.173568 waagent[1884]: 2025-02-13T18:56:01.173500Z INFO Daemon Daemon Primary interface is [eth0]
Feb 13 18:56:01.260210 systemd-networkd[1337]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:56:01.260224 systemd-networkd[1337]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:56:01.260252 systemd-networkd[1337]: eth0: DHCP lease lost
Feb 13 18:56:01.261296 waagent[1884]: 2025-02-13T18:56:01.261205Z INFO Daemon Daemon Create user account if not exists
Feb 13 18:56:01.267048 waagent[1884]: 2025-02-13T18:56:01.266988Z INFO Daemon Daemon User core already exists, skip useradd
Feb 13 18:56:01.274763 systemd-networkd[1337]: eth0: DHCPv6 lease lost
Feb 13 18:56:01.275192 waagent[1884]: 2025-02-13T18:56:01.274906Z INFO Daemon Daemon Configure sudoer
Feb 13 18:56:01.279883 waagent[1884]: 2025-02-13T18:56:01.279815Z INFO Daemon Daemon Configure sshd
Feb 13 18:56:01.284456 waagent[1884]: 2025-02-13T18:56:01.284394Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Feb 13 18:56:01.298881 waagent[1884]: 2025-02-13T18:56:01.298811Z INFO Daemon Daemon Deploy ssh public key.
Feb 13 18:56:01.314787 systemd-networkd[1337]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 13 18:56:02.405284 waagent[1884]: 2025-02-13T18:56:02.399433Z INFO Daemon Daemon Provisioning complete
Feb 13 18:56:02.421398 waagent[1884]: 2025-02-13T18:56:02.421345Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 13 18:56:02.428092 waagent[1884]: 2025-02-13T18:56:02.428028Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 13 18:56:02.439013 waagent[1884]: 2025-02-13T18:56:02.438949Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Feb 13 18:56:02.578882 waagent[1957]: 2025-02-13T18:56:02.578310Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 13 18:56:02.578882 waagent[1957]: 2025-02-13T18:56:02.578476Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.1
Feb 13 18:56:02.578882 waagent[1957]: 2025-02-13T18:56:02.578528Z INFO ExtHandler ExtHandler Python: 3.11.10
Feb 13 18:56:02.795801 waagent[1957]: 2025-02-13T18:56:02.795606Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 13 18:56:02.795931 waagent[1957]: 2025-02-13T18:56:02.795889Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 13 18:56:02.795989 waagent[1957]: 2025-02-13T18:56:02.795962Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 13 18:56:02.804813 waagent[1957]: 2025-02-13T18:56:02.804743Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 13 18:56:03.276280 waagent[1957]: 2025-02-13T18:56:03.276228Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Feb 13 18:56:03.276891 waagent[1957]: 2025-02-13T18:56:03.276848Z INFO ExtHandler
Feb 13 18:56:03.276967 waagent[1957]: 2025-02-13T18:56:03.276937Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f5632e7f-9113-40bf-a854-6578836c801b eTag: 9005364897108872658 source: Fabric]
Feb 13 18:56:03.277270 waagent[1957]: 2025-02-13T18:56:03.277232Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 13 18:56:03.277871 waagent[1957]: 2025-02-13T18:56:03.277824Z INFO ExtHandler
Feb 13 18:56:03.277935 waagent[1957]: 2025-02-13T18:56:03.277905Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 13 18:56:03.282618 waagent[1957]: 2025-02-13T18:56:03.282572Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 13 18:56:03.368760 waagent[1957]: 2025-02-13T18:56:03.368277Z INFO ExtHandler Downloaded certificate {'thumbprint': '3D9523F92EB7DFF4CF777A4AFDE0F91B6D8F2548', 'hasPrivateKey': True}
Feb 13 18:56:03.368857 waagent[1957]: 2025-02-13T18:56:03.368797Z INFO ExtHandler Downloaded certificate {'thumbprint': '32F0BE82A5EA9CF550E00CD92AD3EB450D74EFB4', 'hasPrivateKey': False}
Feb 13 18:56:03.369288 waagent[1957]: 2025-02-13T18:56:03.369236Z INFO ExtHandler Fetch goal state completed
Feb 13 18:56:03.388102 waagent[1957]: 2025-02-13T18:56:03.388043Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1957
Feb 13 18:56:03.388251 waagent[1957]: 2025-02-13T18:56:03.388217Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Feb 13 18:56:03.389935 waagent[1957]: 2025-02-13T18:56:03.389884Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.1', '', 'Flatcar Container Linux by Kinvolk']
Feb 13 18:56:03.390331 waagent[1957]: 2025-02-13T18:56:03.390290Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 13 18:56:03.421803 waagent[1957]: 2025-02-13T18:56:03.421760Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 13 18:56:03.422000 waagent[1957]: 2025-02-13T18:56:03.421962Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 13 18:56:03.428711 waagent[1957]: 2025-02-13T18:56:03.428186Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 13 18:56:03.435041 systemd[1]: Reloading requested from client PID 1972 ('systemctl') (unit waagent.service)...
Feb 13 18:56:03.435057 systemd[1]: Reloading...
Feb 13 18:56:03.517740 zram_generator::config[2005]: No configuration found.
Feb 13 18:56:03.624582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:56:03.705450 systemd[1]: Reloading finished in 270 ms.
Feb 13 18:56:03.728334 waagent[1957]: 2025-02-13T18:56:03.727959Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Feb 13 18:56:03.735123 systemd[1]: Reloading requested from client PID 2060 ('systemctl') (unit waagent.service)...
Feb 13 18:56:03.735142 systemd[1]: Reloading...
Feb 13 18:56:03.817714 zram_generator::config[2097]: No configuration found.
Feb 13 18:56:03.934167 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:56:04.014128 systemd[1]: Reloading finished in 278 ms.
Feb 13 18:56:04.035748 waagent[1957]: 2025-02-13T18:56:04.034940Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Feb 13 18:56:04.035748 waagent[1957]: 2025-02-13T18:56:04.035142Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Feb 13 18:56:04.437728 waagent[1957]: 2025-02-13T18:56:04.436855Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb 13 18:56:04.437728 waagent[1957]: 2025-02-13T18:56:04.437483Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 13 18:56:04.438363 waagent[1957]: 2025-02-13T18:56:04.438280Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 13 18:56:04.438882 waagent[1957]: 2025-02-13T18:56:04.438714Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 13 18:56:04.439754 waagent[1957]: 2025-02-13T18:56:04.439113Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 13 18:56:04.439754 waagent[1957]: 2025-02-13T18:56:04.439201Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 13 18:56:04.439754 waagent[1957]: 2025-02-13T18:56:04.439401Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 13 18:56:04.439754 waagent[1957]: 2025-02-13T18:56:04.439626Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 13 18:56:04.439754 waagent[1957]: Iface        Destination        Gateway         Flags        RefCnt        Use        Metric        Mask                MTU        Window        IRTT
Feb 13 18:56:04.439754 waagent[1957]: eth0        00000000        0114C80A        0003        0        0        1024        00000000        0        0        0
Feb 13 18:56:04.439754 waagent[1957]: eth0        0014C80A        00000000        0001        0        0        1024        00FFFFFF        0        0        0
Feb 13 18:56:04.439754 waagent[1957]: eth0        0114C80A        00000000        0005        0        0        1024        FFFFFFFF        0        0        0
Feb 13 18:56:04.439754 waagent[1957]: eth0        10813FA8        0114C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Feb 13 18:56:04.439754 waagent[1957]: eth0        FEA9FEA9        0114C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Feb 13 18:56:04.440099 waagent[1957]: 2025-02-13T18:56:04.440046Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 13 18:56:04.440274 waagent[1957]: 2025-02-13T18:56:04.440226Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 13 18:56:04.440390 waagent[1957]: 2025-02-13T18:56:04.440334Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 13 18:56:04.440517 waagent[1957]: 2025-02-13T18:56:04.440472Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 13 18:56:04.440797 waagent[1957]: 2025-02-13T18:56:04.440746Z INFO EnvHandler ExtHandler Configure routes
Feb 13 18:56:04.441205 waagent[1957]: 2025-02-13T18:56:04.441145Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 13 18:56:04.441369 waagent[1957]: 2025-02-13T18:56:04.441325Z INFO EnvHandler ExtHandler Gateway:None
Feb 13 18:56:04.441479 waagent[1957]: 2025-02-13T18:56:04.441436Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 13 18:56:04.441605 waagent[1957]: 2025-02-13T18:56:04.441537Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 13 18:56:04.441818 waagent[1957]: 2025-02-13T18:56:04.441771Z INFO EnvHandler ExtHandler Routes:None
Feb 13 18:56:04.461227 waagent[1957]: 2025-02-13T18:56:04.461165Z INFO ExtHandler ExtHandler
Feb 13 18:56:04.461333 waagent[1957]: 2025-02-13T18:56:04.461300Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 847f860c-6862-4e8b-859c-78703e1cdc74 correlation 309b9155-870d-4b68-b4ac-05bcb48fc7fd created: 2025-02-13T18:54:41.409470Z]
Feb 13 18:56:04.461784 waagent[1957]: 2025-02-13T18:56:04.461739Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 13 18:56:04.462382 waagent[1957]: 2025-02-13T18:56:04.462342Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Feb 13 18:56:04.482888 waagent[1957]: 2025-02-13T18:56:04.482800Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 13 18:56:04.482888 waagent[1957]: Executing ['ip', '-a', '-o', 'link']:
Feb 13 18:56:04.482888 waagent[1957]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 13 18:56:04.482888 waagent[1957]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 00:22:48:bb:29:e8 brd ff:ff:ff:ff:ff:ff
Feb 13 18:56:04.482888 waagent[1957]: 3: enP1331s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\    link/ether 00:22:48:bb:29:e8 brd ff:ff:ff:ff:ff:ff\    altname enP1331p0s2
Feb 13 18:56:04.482888 waagent[1957]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 13 18:56:04.482888 waagent[1957]: 1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
Feb 13 18:56:04.482888 waagent[1957]: 2: eth0    inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\       valid_lft forever preferred_lft forever
Feb 13 18:56:04.482888 waagent[1957]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 13 18:56:04.482888 waagent[1957]: 1: lo    inet6 ::1/128 scope host noprefixroute \       valid_lft forever preferred_lft forever
Feb 13 18:56:04.482888 waagent[1957]: 2: eth0    inet6 fe80::222:48ff:febb:29e8/64 scope link proto kernel_ll \       valid_lft forever preferred_lft forever
Feb 13 18:56:04.482888 waagent[1957]: 3: enP1331s1    inet6 fe80::222:48ff:febb:29e8/64 scope link proto kernel_ll \       valid_lft forever preferred_lft forever
Feb 13 18:56:04.577823 waagent[1957]: 2025-02-13T18:56:04.577204Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Feb 13 18:56:04.577823 waagent[1957]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 18:56:04.577823 waagent[1957]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 18:56:04.577823 waagent[1957]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 13 18:56:04.577823 waagent[1957]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 18:56:04.577823 waagent[1957]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 18:56:04.577823 waagent[1957]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 18:56:04.577823 waagent[1957]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Feb 13 18:56:04.577823 waagent[1957]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Feb 13 18:56:04.577823 waagent[1957]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Feb 13 18:56:04.580555 waagent[1957]: 2025-02-13T18:56:04.580478Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 13 18:56:04.580555 waagent[1957]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 18:56:04.580555 waagent[1957]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 18:56:04.580555 waagent[1957]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 13 18:56:04.580555 waagent[1957]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 18:56:04.580555 waagent[1957]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 13 18:56:04.580555 waagent[1957]:     pkts      bytes target     prot opt in     out     source               destination
Feb 13 18:56:04.580555 waagent[1957]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Feb 13 18:56:04.580555 waagent[1957]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Feb 13 18:56:04.580555 waagent[1957]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Feb 13 18:56:04.580860 waagent[1957]: 2025-02-13T18:56:04.580819Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 13 18:56:05.082902 waagent[1957]: 2025-02-13T18:56:05.082763Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7FA640FA-8DDE-4F64-B15F-9405D90AF5ED;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
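The ACCEPT/DROP rules above all target 168.63.129.16, the Azure platform (WireServer) address: DNS and root-owned (UID 0) traffic to it is allowed, while new connections from other users are dropped. A minimal way to re-inspect what the agent programmed, assuming it used the security table as recent waagent versions do:

    sudo iptables -w -t security -L OUTPUT -n -v   # table is an assumption; fall back to the default filter table if this chain is empty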
Feb 13 18:56:08.563684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 18:56:08.570890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:56:08.681164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:56:08.684858 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:56:08.755664 kubelet[2191]: E0213 18:56:08.755603    2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:56:08.759040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:56:08.759166 systemd[1]: kubelet.service: Failed with result 'exit-code'.
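The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during init/join, so systemd keeps restarting the unit (restart counters 2 through 7 below) until bootstrap happens. A quick triage sketch on the node, assuming shell access like the SSH sessions later in this log:

    systemctl status kubelet --no-pager
    ls -l /var/lib/kubelet/config.yaml     # absent until kubeadm init/join generates it
    journalctl -u kubelet -n 20 --no-pager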
Feb 13 18:56:18.813724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 18:56:18.821924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:56:18.910238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:56:18.914412 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:56:19.044836 kubelet[2207]: E0213 18:56:19.044782    2207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:56:19.046741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:56:19.046868 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:56:20.400849 chronyd[1718]: Selected source PHC0
Feb 13 18:56:29.063821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 18:56:29.072870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:56:29.165899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:56:29.170171 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:56:29.222980 kubelet[2222]: E0213 18:56:29.222910    2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:56:29.225516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:56:29.225661 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:56:37.491658 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Feb 13 18:56:39.313729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Feb 13 18:56:39.324269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:56:39.413538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:56:39.417619 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:56:39.453717 kubelet[2237]: E0213 18:56:39.453647    2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:56:39.455879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:56:39.456194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:56:41.739336 update_engine[1735]: I20250213 18:56:41.738672  1735 update_attempter.cc:509] Updating boot flags...
Feb 13 18:56:41.810763 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2259)
Feb 13 18:56:48.642521 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 18:56:48.645136 systemd[1]: Started sshd@0-10.200.20.41:22-10.200.16.10:54316.service - OpenSSH per-connection server daemon (10.200.16.10:54316).
Feb 13 18:56:49.222705 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 54316 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:56:49.223970 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:56:49.227727 systemd-logind[1729]: New session 3 of user core.
Feb 13 18:56:49.233911 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 18:56:49.563569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Feb 13 18:56:49.571863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:56:49.639235 systemd[1]: Started sshd@1-10.200.20.41:22-10.200.16.10:55608.service - OpenSSH per-connection server daemon (10.200.16.10:55608).
Feb 13 18:56:49.671558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:56:49.675772 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:56:49.708827 kubelet[2323]: E0213 18:56:49.708774    2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:56:49.711465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:56:49.711588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:56:50.087903 sshd[2316]: Accepted publickey for core from 10.200.16.10 port 55608 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:56:50.089214 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:56:50.093020 systemd-logind[1729]: New session 4 of user core.
Feb 13 18:56:50.100853 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 18:56:50.422878 sshd[2330]: Connection closed by 10.200.16.10 port 55608
Feb 13 18:56:50.425865 sshd-session[2316]: pam_unix(sshd:session): session closed for user core
Feb 13 18:56:50.428839 systemd[1]: sshd@1-10.200.20.41:22-10.200.16.10:55608.service: Deactivated successfully.
Feb 13 18:56:50.430202 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 18:56:50.430791 systemd-logind[1729]: Session 4 logged out. Waiting for processes to exit.
Feb 13 18:56:50.431913 systemd-logind[1729]: Removed session 4.
Feb 13 18:56:50.506151 systemd[1]: Started sshd@2-10.200.20.41:22-10.200.16.10:55622.service - OpenSSH per-connection server daemon (10.200.16.10:55622).
Feb 13 18:56:50.953718 sshd[2335]: Accepted publickey for core from 10.200.16.10 port 55622 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:56:50.954976 sshd-session[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:56:50.958589 systemd-logind[1729]: New session 5 of user core.
Feb 13 18:56:50.965854 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 18:56:51.276542 sshd[2337]: Connection closed by 10.200.16.10 port 55622
Feb 13 18:56:51.276372 sshd-session[2335]: pam_unix(sshd:session): session closed for user core
Feb 13 18:56:51.279841 systemd[1]: sshd@2-10.200.20.41:22-10.200.16.10:55622.service: Deactivated successfully.
Feb 13 18:56:51.281213 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 18:56:51.282338 systemd-logind[1729]: Session 5 logged out. Waiting for processes to exit.
Feb 13 18:56:51.283314 systemd-logind[1729]: Removed session 5.
Feb 13 18:56:51.355983 systemd[1]: Started sshd@3-10.200.20.41:22-10.200.16.10:55628.service - OpenSSH per-connection server daemon (10.200.16.10:55628).
Feb 13 18:56:51.803213 sshd[2342]: Accepted publickey for core from 10.200.16.10 port 55628 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:56:51.804432 sshd-session[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:56:51.807979 systemd-logind[1729]: New session 6 of user core.
Feb 13 18:56:51.817839 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 18:56:52.128556 sshd[2344]: Connection closed by 10.200.16.10 port 55628
Feb 13 18:56:52.129113 sshd-session[2342]: pam_unix(sshd:session): session closed for user core
Feb 13 18:56:52.132335 systemd[1]: sshd@3-10.200.20.41:22-10.200.16.10:55628.service: Deactivated successfully.
Feb 13 18:56:52.133952 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 18:56:52.134563 systemd-logind[1729]: Session 6 logged out. Waiting for processes to exit.
Feb 13 18:56:52.135562 systemd-logind[1729]: Removed session 6.
Feb 13 18:56:52.217065 systemd[1]: Started sshd@4-10.200.20.41:22-10.200.16.10:55638.service - OpenSSH per-connection server daemon (10.200.16.10:55638).
Feb 13 18:56:52.703040 sshd[2349]: Accepted publickey for core from 10.200.16.10 port 55638 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:56:52.704297 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:56:52.708050 systemd-logind[1729]: New session 7 of user core.
Feb 13 18:56:52.716845 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 18:56:53.125808 sudo[2352]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 18:56:53.126087 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 18:56:53.155474 sudo[2352]: pam_unix(sudo:session): session closed for user root
Feb 13 18:56:53.230401 sshd[2351]: Connection closed by 10.200.16.10 port 55638
Feb 13 18:56:53.229604 sshd-session[2349]: pam_unix(sshd:session): session closed for user core
Feb 13 18:56:53.232633 systemd[1]: sshd@4-10.200.20.41:22-10.200.16.10:55638.service: Deactivated successfully.
Feb 13 18:56:53.234251 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 18:56:53.235460 systemd-logind[1729]: Session 7 logged out. Waiting for processes to exit.
Feb 13 18:56:53.236329 systemd-logind[1729]: Removed session 7.
Feb 13 18:56:53.317844 systemd[1]: Started sshd@5-10.200.20.41:22-10.200.16.10:55642.service - OpenSSH per-connection server daemon (10.200.16.10:55642).
Feb 13 18:56:53.774709 sshd[2357]: Accepted publickey for core from 10.200.16.10 port 55642 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:56:53.776061 sshd-session[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:56:53.780717 systemd-logind[1729]: New session 8 of user core.
Feb 13 18:56:53.786841 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 18:56:54.027806 sudo[2361]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 18:56:54.028267 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 18:56:54.031291 sudo[2361]: pam_unix(sudo:session): session closed for user root
Feb 13 18:56:54.035503 sudo[2360]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 18:56:54.035822 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 18:56:54.051978 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 18:56:54.073602 augenrules[2383]: No rules
Feb 13 18:56:54.074652 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 18:56:54.075788 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 18:56:54.077339 sudo[2360]: pam_unix(sudo:session): session closed for user root
Feb 13 18:56:54.151732 sshd[2359]: Connection closed by 10.200.16.10 port 55642
Feb 13 18:56:54.152169 sshd-session[2357]: pam_unix(sshd:session): session closed for user core
Feb 13 18:56:54.154920 systemd[1]: sshd@5-10.200.20.41:22-10.200.16.10:55642.service: Deactivated successfully.
Feb 13 18:56:54.156869 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 18:56:54.158447 systemd-logind[1729]: Session 8 logged out. Waiting for processes to exit.
Feb 13 18:56:54.159395 systemd-logind[1729]: Removed session 8.
Feb 13 18:56:54.231614 systemd[1]: Started sshd@6-10.200.20.41:22-10.200.16.10:55658.service - OpenSSH per-connection server daemon (10.200.16.10:55658).
Feb 13 18:56:54.679624 sshd[2391]: Accepted publickey for core from 10.200.16.10 port 55658 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:56:54.680896 sshd-session[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:56:54.684533 systemd-logind[1729]: New session 9 of user core.
Feb 13 18:56:54.691848 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 18:56:54.932494 sudo[2394]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 18:56:54.933376 sudo[2394]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 18:56:56.355965 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 18:56:56.356051 (dockerd)[2413]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 18:56:57.220170 dockerd[2413]: time="2025-02-13T18:56:57.220121124Z" level=info msg="Starting up"
Feb 13 18:56:57.510110 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport836935290-merged.mount: Deactivated successfully.
Feb 13 18:56:57.543070 dockerd[2413]: time="2025-02-13T18:56:57.543029996Z" level=info msg="Loading containers: start."
Feb 13 18:56:57.770816 kernel: Initializing XFRM netlink socket
Feb 13 18:56:57.902392 systemd-networkd[1337]: docker0: Link UP
Feb 13 18:56:57.936321 dockerd[2413]: time="2025-02-13T18:56:57.935746513Z" level=info msg="Loading containers: done."
Feb 13 18:56:57.960556 dockerd[2413]: time="2025-02-13T18:56:57.960501500Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 18:56:57.960726 dockerd[2413]: time="2025-02-13T18:56:57.960611500Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 18:56:57.960795 dockerd[2413]: time="2025-02-13T18:56:57.960755980Z" level=info msg="Daemon has completed initialization"
Feb 13 18:56:58.006457 dockerd[2413]: time="2025-02-13T18:56:58.006339916Z" level=info msg="API listen on /run/docker.sock"
Feb 13 18:56:58.006543 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 18:56:58.508110 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck325806951-merged.mount: Deactivated successfully.
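With docker.service up and the API listening on /run/docker.sock, a minimal sanity check (hypothetical commands, not part of the recorded boot flow) would be:

    curl --silent --unix-socket /run/docker.sock http://localhost/_ping ; echo   # expect "OK"
    docker info --format '{{.ServerVersion}} {{.Driver}}'                        # 27.3.1 / overlay2 per the daemon log above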
Feb 13 18:56:58.767522 containerd[1761]: time="2025-02-13T18:56:58.767409602Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\""
Feb 13 18:56:59.560589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2997042121.mount: Deactivated successfully.
Feb 13 18:56:59.813715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Feb 13 18:56:59.818969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:56:59.976946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:56:59.982118 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:57:00.023701 kubelet[2621]: E0213 18:57:00.023650    2621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:57:00.026220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:57:00.026365 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:57:01.269932 containerd[1761]: time="2025-02-13T18:57:01.269867953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:01.272747 containerd[1761]: time="2025-02-13T18:57:01.272706712Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218236"
Feb 13 18:57:01.275610 containerd[1761]: time="2025-02-13T18:57:01.275559151Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:01.280819 containerd[1761]: time="2025-02-13T18:57:01.280772190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:01.282127 containerd[1761]: time="2025-02-13T18:57:01.281687069Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.514235587s"
Feb 13 18:57:01.282127 containerd[1761]: time="2025-02-13T18:57:01.281738589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\""
Feb 13 18:57:01.282609 containerd[1761]: time="2025-02-13T18:57:01.282407789Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\""
Feb 13 18:57:02.578964 containerd[1761]: time="2025-02-13T18:57:02.578904035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:02.585142 containerd[1761]: time="2025-02-13T18:57:02.585074473Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528145"
Feb 13 18:57:02.589128 containerd[1761]: time="2025-02-13T18:57:02.589078151Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:02.595964 containerd[1761]: time="2025-02-13T18:57:02.595918669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:02.597352 containerd[1761]: time="2025-02-13T18:57:02.597018909Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.31458392s"
Feb 13 18:57:02.597352 containerd[1761]: time="2025-02-13T18:57:02.597051429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\""
Feb 13 18:57:02.597523 containerd[1761]: time="2025-02-13T18:57:02.597493669Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\""
Feb 13 18:57:03.800992 containerd[1761]: time="2025-02-13T18:57:03.800934623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:03.803277 containerd[1761]: time="2025-02-13T18:57:03.803237742Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480800"
Feb 13 18:57:03.806739 containerd[1761]: time="2025-02-13T18:57:03.806682021Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:03.812567 containerd[1761]: time="2025-02-13T18:57:03.812505899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:03.813734 containerd[1761]: time="2025-02-13T18:57:03.813586779Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.21606243s"
Feb 13 18:57:03.813734 containerd[1761]: time="2025-02-13T18:57:03.813615299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\""
Feb 13 18:57:03.814286 containerd[1761]: time="2025-02-13T18:57:03.814267339Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 18:57:05.518131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182041439.mount: Deactivated successfully.
Feb 13 18:57:05.848214 containerd[1761]: time="2025-02-13T18:57:05.848172839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:05.851006 containerd[1761]: time="2025-02-13T18:57:05.850968279Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382"
Feb 13 18:57:05.858613 containerd[1761]: time="2025-02-13T18:57:05.858591236Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:05.863678 containerd[1761]: time="2025-02-13T18:57:05.863630555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:05.864603 containerd[1761]: time="2025-02-13T18:57:05.864483594Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 2.050113456s"
Feb 13 18:57:05.864603 containerd[1761]: time="2025-02-13T18:57:05.864517154Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\""
Feb 13 18:57:05.865356 containerd[1761]: time="2025-02-13T18:57:05.865323154Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Feb 13 18:57:06.550159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222171919.mount: Deactivated successfully.
Feb 13 18:57:07.556727 containerd[1761]: time="2025-02-13T18:57:07.556620039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:07.559140 containerd[1761]: time="2025-02-13T18:57:07.558836319Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Feb 13 18:57:07.565221 containerd[1761]: time="2025-02-13T18:57:07.565154997Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:07.571286 containerd[1761]: time="2025-02-13T18:57:07.571230235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:07.572521 containerd[1761]: time="2025-02-13T18:57:07.572381915Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.706952721s"
Feb 13 18:57:07.572521 containerd[1761]: time="2025-02-13T18:57:07.572413194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Feb 13 18:57:07.573127 containerd[1761]: time="2025-02-13T18:57:07.572925554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 18:57:08.126626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862977606.mount: Deactivated successfully.
Feb 13 18:57:08.152746 containerd[1761]: time="2025-02-13T18:57:08.152342378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:08.155367 containerd[1761]: time="2025-02-13T18:57:08.155295537Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Feb 13 18:57:08.158282 containerd[1761]: time="2025-02-13T18:57:08.158232416Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:08.164500 containerd[1761]: time="2025-02-13T18:57:08.164438574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:08.165603 containerd[1761]: time="2025-02-13T18:57:08.165172854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 592.09842ms"
Feb 13 18:57:08.165603 containerd[1761]: time="2025-02-13T18:57:08.165203494Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 18:57:08.165770 containerd[1761]: time="2025-02-13T18:57:08.165753774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Feb 13 18:57:08.799447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1770545510.mount: Deactivated successfully.
Feb 13 18:57:10.063734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Feb 13 18:57:10.068893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:57:10.176306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:57:10.188968 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:57:10.223661 kubelet[2761]: E0213 18:57:10.223552    2761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:57:10.225795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:57:10.225938 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:57:11.918147 containerd[1761]: time="2025-02-13T18:57:11.918088537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:11.920674 containerd[1761]: time="2025-02-13T18:57:11.920407216Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429"
Feb 13 18:57:11.924215 containerd[1761]: time="2025-02-13T18:57:11.924150455Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:11.929764 containerd[1761]: time="2025-02-13T18:57:11.929717452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:11.930594 containerd[1761]: time="2025-02-13T18:57:11.930466892Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.764686758s"
Feb 13 18:57:11.930594 containerd[1761]: time="2025-02-13T18:57:11.930495172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
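At this point containerd has cached the v1.32.2 control-plane images plus coredns v1.11.3, pause 3.10 and etcd 3.5.16-0. A sketch of how to confirm the cache, assuming the default containerd socket path:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    sudo ctr -n k8s.io images ls | grep registry.k8s.io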
Feb 13 18:57:17.392195 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:57:17.403045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:57:17.433205 systemd[1]: Reloading requested from client PID 2837 ('systemctl') (unit session-9.scope)...
Feb 13 18:57:17.433228 systemd[1]: Reloading...
Feb 13 18:57:17.534851 zram_generator::config[2877]: No configuration found.
Feb 13 18:57:17.635632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:57:17.715780 systemd[1]: Reloading finished in 282 ms.
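The docker.socket message during the reload is advisory: systemd rewrites the legacy /var/run/docker.sock path to /run/docker.sock itself. If one wanted to silence it, a drop-in along these lines (hypothetical path and contents) would do:

    sudo mkdir -p /etc/systemd/system/docker.socket.d
    printf '[Socket]\nListenStream=\nListenStream=/run/docker.sock\n' | \
        sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf
    sudo systemctl daemon-reload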
Feb 13 18:57:17.763077 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:57:17.763346 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 18:57:17.764747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:57:17.767810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:57:17.879367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:57:17.892273 (kubelet)[2947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 18:57:17.928444 kubelet[2947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:57:17.929721 kubelet[2947]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 18:57:17.929721 kubelet[2947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:57:17.929721 kubelet[2947]: I0213 18:57:17.928944    2947 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 18:57:18.568246 kubelet[2947]: I0213 18:57:18.568211    2947 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 18:57:18.569388 kubelet[2947]: I0213 18:57:18.568601    2947 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 18:57:18.569388 kubelet[2947]: I0213 18:57:18.568886    2947 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 18:57:18.587541 kubelet[2947]: E0213 18:57:18.587495    2947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
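This is the kubelet's TLS bootstrap: it POSTs a CertificateSigningRequest to 10.200.20.41:6443 before any API server is listening, so the call is refused and retried in the background. Once the control plane answers, the request can be inspected (kubeadm normally auto-approves node client CSRs); a hypothetical check using the standard kubeadm admin kubeconfig:

    kubectl --kubeconfig /etc/kubernetes/admin.conf get csr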
Feb 13 18:57:18.589852 kubelet[2947]: I0213 18:57:18.589709    2947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 18:57:18.597753 kubelet[2947]: E0213 18:57:18.596786    2947 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 18:57:18.597753 kubelet[2947]: I0213 18:57:18.596816    2947 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 18:57:18.602929 kubelet[2947]: I0213 18:57:18.602896    2947 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 18:57:18.603876 kubelet[2947]: I0213 18:57:18.603833    2947 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 18:57:18.604136 kubelet[2947]: I0213 18:57:18.603878    2947 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.1-a-21f48afc48","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 18:57:18.604228 kubelet[2947]: I0213 18:57:18.604148    2947 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 18:57:18.604228 kubelet[2947]: I0213 18:57:18.604161    2947 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 18:57:18.604321 kubelet[2947]: I0213 18:57:18.604300    2947 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:57:18.608579 kubelet[2947]: I0213 18:57:18.608549    2947 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 18:57:18.608632 kubelet[2947]: I0213 18:57:18.608587    2947 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 18:57:18.608632 kubelet[2947]: I0213 18:57:18.608611    2947 kubelet.go:352] "Adding apiserver pod source"
Feb 13 18:57:18.608632 kubelet[2947]: I0213 18:57:18.608621    2947 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 18:57:18.612516 kubelet[2947]: W0213 18:57:18.610950    2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Feb 13 18:57:18.612516 kubelet[2947]: E0213 18:57:18.611007    2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:18.612516 kubelet[2947]: W0213 18:57:18.611068    2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-21f48afc48&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Feb 13 18:57:18.612516 kubelet[2947]: E0213 18:57:18.611092    2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-21f48afc48&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:18.612516 kubelet[2947]: I0213 18:57:18.611467    2947 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 18:57:18.612516 kubelet[2947]: I0213 18:57:18.611959    2947 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 18:57:18.612516 kubelet[2947]: W0213 18:57:18.612012    2947 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 18:57:18.613796 kubelet[2947]: I0213 18:57:18.613571    2947 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 18:57:18.613796 kubelet[2947]: I0213 18:57:18.613605    2947 server.go:1287] "Started kubelet"
Feb 13 18:57:18.617870 kubelet[2947]: E0213 18:57:18.617465    2947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.1-a-21f48afc48.1823d98332eea5d6  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-a-21f48afc48,UID:ci-4186.1.1-a-21f48afc48,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-a-21f48afc48,},FirstTimestamp:2025-02-13 18:57:18.613587414 +0000 UTC m=+0.718084881,LastTimestamp:2025-02-13 18:57:18.613587414 +0000 UTC m=+0.718084881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-a-21f48afc48,}"
Feb 13 18:57:18.617870 kubelet[2947]: I0213 18:57:18.617675    2947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 18:57:18.618093 kubelet[2947]: I0213 18:57:18.618017    2947 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 18:57:18.618093 kubelet[2947]: I0213 18:57:18.618085    2947 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 18:57:18.618956 kubelet[2947]: I0213 18:57:18.618924    2947 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 18:57:18.619244 kubelet[2947]: I0213 18:57:18.619228    2947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 18:57:18.620600 kubelet[2947]: I0213 18:57:18.620553    2947 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 18:57:18.622541 kubelet[2947]: E0213 18:57:18.622121    2947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-21f48afc48\" not found"
Feb 13 18:57:18.622541 kubelet[2947]: I0213 18:57:18.622168    2947 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 18:57:18.622541 kubelet[2947]: I0213 18:57:18.622342    2947 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 18:57:18.622541 kubelet[2947]: I0213 18:57:18.622401    2947 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 18:57:18.622841 kubelet[2947]: W0213 18:57:18.622794    2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Feb 13 18:57:18.622899 kubelet[2947]: E0213 18:57:18.622844    2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:18.623609 kubelet[2947]: E0213 18:57:18.623370    2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-21f48afc48?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="200ms"
Feb 13 18:57:18.624108 kubelet[2947]: I0213 18:57:18.624074    2947 factory.go:221] Registration of the systemd container factory successfully
Feb 13 18:57:18.624199 kubelet[2947]: I0213 18:57:18.624174    2947 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 18:57:18.624536 kubelet[2947]: E0213 18:57:18.624494    2947 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 18:57:18.625575 kubelet[2947]: I0213 18:57:18.625552    2947 factory.go:221] Registration of the containerd container factory successfully
Feb 13 18:57:18.667389 kubelet[2947]: I0213 18:57:18.667348    2947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 18:57:18.669623 kubelet[2947]: I0213 18:57:18.669265    2947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 18:57:18.669623 kubelet[2947]: I0213 18:57:18.669296    2947 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 18:57:18.669623 kubelet[2947]: I0213 18:57:18.669321    2947 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 18:57:18.669623 kubelet[2947]: I0213 18:57:18.669327    2947 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 18:57:18.669623 kubelet[2947]: E0213 18:57:18.669372    2947 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 18:57:18.677801 kubelet[2947]: W0213 18:57:18.677749    2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Feb 13 18:57:18.678102 kubelet[2947]: E0213 18:57:18.677963    2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:18.722299 kubelet[2947]: E0213 18:57:18.722255    2947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.1-a-21f48afc48\" not found"
Feb 13 18:57:18.759120 kubelet[2947]: I0213 18:57:18.758853    2947 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 18:57:18.759120 kubelet[2947]: I0213 18:57:18.758871    2947 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 18:57:18.759120 kubelet[2947]: I0213 18:57:18.758898    2947 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:57:18.764368 kubelet[2947]: I0213 18:57:18.764107    2947 policy_none.go:49] "None policy: Start"
Feb 13 18:57:18.764368 kubelet[2947]: I0213 18:57:18.764133    2947 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 18:57:18.764368 kubelet[2947]: I0213 18:57:18.764145    2947 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 18:57:18.769628 kubelet[2947]: E0213 18:57:18.769590    2947 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 18:57:18.771718 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 18:57:18.783878 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 18:57:18.787146 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 18:57:18.795711 kubelet[2947]: I0213 18:57:18.795661    2947 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 18:57:18.796128 kubelet[2947]: I0213 18:57:18.795890    2947 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 18:57:18.796128 kubelet[2947]: I0213 18:57:18.795910    2947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 18:57:18.796209 kubelet[2947]: I0213 18:57:18.796159    2947 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 18:57:18.798437 kubelet[2947]: E0213 18:57:18.798407    2947 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 18:57:18.798547 kubelet[2947]: E0213 18:57:18.798496    2947 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.1-a-21f48afc48\" not found"
Feb 13 18:57:18.824905 kubelet[2947]: E0213 18:57:18.824219    2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-21f48afc48?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="400ms"
Feb 13 18:57:18.898173 kubelet[2947]: I0213 18:57:18.898074    2947 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:18.898490 kubelet[2947]: E0213 18:57:18.898440    2947 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4186.1.1-a-21f48afc48"
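Node registration and the lease requests fail for the same reason as the watch errors above: nothing is listening on 10.200.20.41:6443 yet, and the lease controller backs off its retry interval (200ms, 400ms, 800ms in the surrounding messages). A hypothetical probe to run once the static kube-apiserver pod comes up:

    curl -ks https://10.200.20.41:6443/healthz ; echo
    sudo crictl ps --name kube-apiserver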
Feb 13 18:57:18.980259 systemd[1]: Created slice kubepods-burstable-podc676114a2f00c154f031842422933bad.slice - libcontainer container kubepods-burstable-podc676114a2f00c154f031842422933bad.slice.
Feb 13 18:57:18.988921 kubelet[2947]: E0213 18:57:18.988493    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:18.993056 systemd[1]: Created slice kubepods-burstable-pod21072e7a19cd84b0dfff5cbbc2ff08ae.slice - libcontainer container kubepods-burstable-pod21072e7a19cd84b0dfff5cbbc2ff08ae.slice.
Feb 13 18:57:19.003104 kubelet[2947]: E0213 18:57:19.002914    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.005812 systemd[1]: Created slice kubepods-burstable-pod3829492a238f93c2cb1acc376dc82b2c.slice - libcontainer container kubepods-burstable-pod3829492a238f93c2cb1acc376dc82b2c.slice.
Feb 13 18:57:19.007497 kubelet[2947]: E0213 18:57:19.007471    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025735 kubelet[2947]: I0213 18:57:19.025686    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025735 kubelet[2947]: I0213 18:57:19.025735    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025845 kubelet[2947]: I0213 18:57:19.025755    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025845 kubelet[2947]: I0213 18:57:19.025771    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025845 kubelet[2947]: I0213 18:57:19.025787    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3829492a238f93c2cb1acc376dc82b2c-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-a-21f48afc48\" (UID: \"3829492a238f93c2cb1acc376dc82b2c\") " pod="kube-system/kube-scheduler-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025845 kubelet[2947]: I0213 18:57:19.025801    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21072e7a19cd84b0dfff5cbbc2ff08ae-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-a-21f48afc48\" (UID: \"21072e7a19cd84b0dfff5cbbc2ff08ae\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025845 kubelet[2947]: I0213 18:57:19.025815    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21072e7a19cd84b0dfff5cbbc2ff08ae-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-a-21f48afc48\" (UID: \"21072e7a19cd84b0dfff5cbbc2ff08ae\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025953 kubelet[2947]: I0213 18:57:19.025833    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21072e7a19cd84b0dfff5cbbc2ff08ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-a-21f48afc48\" (UID: \"21072e7a19cd84b0dfff5cbbc2ff08ae\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.025953 kubelet[2947]: I0213 18:57:19.025853    2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.101090 kubelet[2947]: I0213 18:57:19.100540    2947 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.101165 kubelet[2947]: E0213 18:57:19.101109    2947 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.225366 kubelet[2947]: E0213 18:57:19.225327    2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-21f48afc48?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="800ms"
Feb 13 18:57:19.290582 containerd[1761]: time="2025-02-13T18:57:19.290474057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-a-21f48afc48,Uid:c676114a2f00c154f031842422933bad,Namespace:kube-system,Attempt:0,}"
Feb 13 18:57:19.305027 containerd[1761]: time="2025-02-13T18:57:19.304971600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-a-21f48afc48,Uid:21072e7a19cd84b0dfff5cbbc2ff08ae,Namespace:kube-system,Attempt:0,}"
Feb 13 18:57:19.308985 containerd[1761]: time="2025-02-13T18:57:19.308650626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-a-21f48afc48,Uid:3829492a238f93c2cb1acc376dc82b2c,Namespace:kube-system,Attempt:0,}"
Feb 13 18:57:19.502933 kubelet[2947]: I0213 18:57:19.502823    2947 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.503392 kubelet[2947]: E0213 18:57:19.503363    2947 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:19.583623 kubelet[2947]: W0213 18:57:19.583519    2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Feb 13 18:57:19.583623 kubelet[2947]: E0213 18:57:19.583589    2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:19.673720 kubelet[2947]: W0213 18:57:19.673600    2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Feb 13 18:57:19.673720 kubelet[2947]: E0213 18:57:19.673641    2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:19.706096 kubelet[2947]: W0213 18:57:19.706038    2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Feb 13 18:57:19.706192 kubelet[2947]: E0213 18:57:19.706104    2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:19.901933 kubelet[2947]: W0213 18:57:19.901872    2947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-21f48afc48&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Feb 13 18:57:19.901933 kubelet[2947]: E0213 18:57:19.901940    2947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-a-21f48afc48&limit=500&resourceVersion=0\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:19.977017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421681269.mount: Deactivated successfully.
Feb 13 18:57:20.003188 containerd[1761]: time="2025-02-13T18:57:20.003103519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:57:20.015752 containerd[1761]: time="2025-02-13T18:57:20.015679110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 18:57:20.020992 containerd[1761]: time="2025-02-13T18:57:20.020932369Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:57:20.026223 kubelet[2947]: E0213 18:57:20.026185    2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-21f48afc48?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="1.6s"
Feb 13 18:57:20.028733 containerd[1761]: time="2025-02-13T18:57:20.028591979Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:57:20.035870 containerd[1761]: time="2025-02-13T18:57:20.035794990Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 18:57:20.039202 containerd[1761]: time="2025-02-13T18:57:20.039153657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:57:20.041381 containerd[1761]: time="2025-02-13T18:57:20.041330048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 18:57:20.046045 containerd[1761]: time="2025-02-13T18:57:20.045317112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:57:20.046236 containerd[1761]: time="2025-02-13T18:57:20.046193789Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 755.617812ms"
Feb 13 18:57:20.055581 containerd[1761]: time="2025-02-13T18:57:20.055520792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 750.432072ms"
Feb 13 18:57:20.071811 containerd[1761]: time="2025-02-13T18:57:20.071767008Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 763.013823ms"
Feb 13 18:57:20.306449 kubelet[2947]: I0213 18:57:20.306009    2947 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:20.306449 kubelet[2947]: E0213 18:57:20.306338    2947 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:20.693758 kubelet[2947]: E0213 18:57:20.693715    2947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.41:6443: connect: connection refused" logger="UnhandledError"
Feb 13 18:57:21.305382 containerd[1761]: time="2025-02-13T18:57:21.305148623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:57:21.305382 containerd[1761]: time="2025-02-13T18:57:21.305207823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:57:21.305382 containerd[1761]: time="2025-02-13T18:57:21.305218543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:21.305789 containerd[1761]: time="2025-02-13T18:57:21.305386063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:21.308410 containerd[1761]: time="2025-02-13T18:57:21.308076903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:57:21.308789 containerd[1761]: time="2025-02-13T18:57:21.308245623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:57:21.308789 containerd[1761]: time="2025-02-13T18:57:21.308742183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:21.309343 containerd[1761]: time="2025-02-13T18:57:21.309292743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:21.320931 containerd[1761]: time="2025-02-13T18:57:21.320668662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:57:21.321155 containerd[1761]: time="2025-02-13T18:57:21.320841542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:57:21.321155 containerd[1761]: time="2025-02-13T18:57:21.320945622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:21.322646 containerd[1761]: time="2025-02-13T18:57:21.321919622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:21.348899 systemd[1]: Started cri-containerd-c64fbf1296e38b542559d58877c35807f8d27b3b52f5d6d855c9d4bd6f5e0ce2.scope - libcontainer container c64fbf1296e38b542559d58877c35807f8d27b3b52f5d6d855c9d4bd6f5e0ce2.
Feb 13 18:57:21.355383 systemd[1]: Started cri-containerd-3ad467f0e710fb52520e0984d6e034061148353508675f25638cf1f458f44b2d.scope - libcontainer container 3ad467f0e710fb52520e0984d6e034061148353508675f25638cf1f458f44b2d.
Feb 13 18:57:21.357824 systemd[1]: Started cri-containerd-e5af7d93c3afde904faa5986266d07b29cdd1b8f542ad25a0a54305a4eadc959.scope - libcontainer container e5af7d93c3afde904faa5986266d07b29cdd1b8f542ad25a0a54305a4eadc959.
Feb 13 18:57:21.409489 containerd[1761]: time="2025-02-13T18:57:21.409454454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-a-21f48afc48,Uid:c676114a2f00c154f031842422933bad,Namespace:kube-system,Attempt:0,} returns sandbox id \"c64fbf1296e38b542559d58877c35807f8d27b3b52f5d6d855c9d4bd6f5e0ce2\""
Feb 13 18:57:21.413573 containerd[1761]: time="2025-02-13T18:57:21.413374774Z" level=info msg="CreateContainer within sandbox \"c64fbf1296e38b542559d58877c35807f8d27b3b52f5d6d855c9d4bd6f5e0ce2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 18:57:21.416864 containerd[1761]: time="2025-02-13T18:57:21.416476373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-a-21f48afc48,Uid:3829492a238f93c2cb1acc376dc82b2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ad467f0e710fb52520e0984d6e034061148353508675f25638cf1f458f44b2d\""
Feb 13 18:57:21.418973 containerd[1761]: time="2025-02-13T18:57:21.418909653Z" level=info msg="CreateContainer within sandbox \"3ad467f0e710fb52520e0984d6e034061148353508675f25638cf1f458f44b2d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 18:57:21.421453 containerd[1761]: time="2025-02-13T18:57:21.421413773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-a-21f48afc48,Uid:21072e7a19cd84b0dfff5cbbc2ff08ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5af7d93c3afde904faa5986266d07b29cdd1b8f542ad25a0a54305a4eadc959\""
Feb 13 18:57:21.425561 containerd[1761]: time="2025-02-13T18:57:21.424895853Z" level=info msg="CreateContainer within sandbox \"e5af7d93c3afde904faa5986266d07b29cdd1b8f542ad25a0a54305a4eadc959\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 18:57:21.627072 kubelet[2947]: E0213 18:57:21.627022    2947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-a-21f48afc48?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="3.2s"
Feb 13 18:57:21.827711 containerd[1761]: time="2025-02-13T18:57:21.827648096Z" level=info msg="CreateContainer within sandbox \"3ad467f0e710fb52520e0984d6e034061148353508675f25638cf1f458f44b2d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c7b9d5332c025c4241c0402a8626dbccd160ba560e9b00e1da8304f32b53e7f6\""
Feb 13 18:57:21.828311 containerd[1761]: time="2025-02-13T18:57:21.828276656Z" level=info msg="StartContainer for \"c7b9d5332c025c4241c0402a8626dbccd160ba560e9b00e1da8304f32b53e7f6\""
Feb 13 18:57:21.836060 containerd[1761]: time="2025-02-13T18:57:21.836012576Z" level=info msg="CreateContainer within sandbox \"e5af7d93c3afde904faa5986266d07b29cdd1b8f542ad25a0a54305a4eadc959\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a2d36fc6f8e6085107aefb21d26194fee11770c84304db09d235eef2f91b6fdb\""
Feb 13 18:57:21.837283 containerd[1761]: time="2025-02-13T18:57:21.837182736Z" level=info msg="StartContainer for \"a2d36fc6f8e6085107aefb21d26194fee11770c84304db09d235eef2f91b6fdb\""
Feb 13 18:57:21.847904 containerd[1761]: time="2025-02-13T18:57:21.847796135Z" level=info msg="CreateContainer within sandbox \"c64fbf1296e38b542559d58877c35807f8d27b3b52f5d6d855c9d4bd6f5e0ce2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"642feb0557db5cf140bf1883bb196502b38967f22b6b672e3360afea44349785\""
Feb 13 18:57:21.852421 containerd[1761]: time="2025-02-13T18:57:21.851508294Z" level=info msg="StartContainer for \"642feb0557db5cf140bf1883bb196502b38967f22b6b672e3360afea44349785\""
Feb 13 18:57:21.859918 systemd[1]: Started cri-containerd-c7b9d5332c025c4241c0402a8626dbccd160ba560e9b00e1da8304f32b53e7f6.scope - libcontainer container c7b9d5332c025c4241c0402a8626dbccd160ba560e9b00e1da8304f32b53e7f6.
Feb 13 18:57:21.889931 systemd[1]: Started cri-containerd-a2d36fc6f8e6085107aefb21d26194fee11770c84304db09d235eef2f91b6fdb.scope - libcontainer container a2d36fc6f8e6085107aefb21d26194fee11770c84304db09d235eef2f91b6fdb.
Feb 13 18:57:21.902002 systemd[1]: Started cri-containerd-642feb0557db5cf140bf1883bb196502b38967f22b6b672e3360afea44349785.scope - libcontainer container 642feb0557db5cf140bf1883bb196502b38967f22b6b672e3360afea44349785.
Feb 13 18:57:21.909311 containerd[1761]: time="2025-02-13T18:57:21.909257209Z" level=info msg="StartContainer for \"c7b9d5332c025c4241c0402a8626dbccd160ba560e9b00e1da8304f32b53e7f6\" returns successfully"
Feb 13 18:57:21.911555 kubelet[2947]: I0213 18:57:21.911433    2947 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:21.912644 kubelet[2947]: E0213 18:57:21.912585    2947 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:21.974649 containerd[1761]: time="2025-02-13T18:57:21.974603083Z" level=info msg="StartContainer for \"a2d36fc6f8e6085107aefb21d26194fee11770c84304db09d235eef2f91b6fdb\" returns successfully"
Feb 13 18:57:21.974980 containerd[1761]: time="2025-02-13T18:57:21.974824683Z" level=info msg="StartContainer for \"642feb0557db5cf140bf1883bb196502b38967f22b6b672e3360afea44349785\" returns successfully"
Feb 13 18:57:22.318926 systemd[1]: run-containerd-runc-k8s.io-e5af7d93c3afde904faa5986266d07b29cdd1b8f542ad25a0a54305a4eadc959-runc.3N2cgj.mount: Deactivated successfully.
Feb 13 18:57:22.689823 kubelet[2947]: E0213 18:57:22.689506    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:22.692797 kubelet[2947]: E0213 18:57:22.692456    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:22.696282 kubelet[2947]: E0213 18:57:22.696083    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:23.699726 kubelet[2947]: E0213 18:57:23.698907    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:23.699726 kubelet[2947]: E0213 18:57:23.699245    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:23.702074 kubelet[2947]: E0213 18:57:23.702044    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:24.323420 kubelet[2947]: E0213 18:57:24.323342    2947 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.1.1-a-21f48afc48" not found
Feb 13 18:57:24.614928 kubelet[2947]: I0213 18:57:24.613706    2947 apiserver.go:52] "Watching apiserver"
Feb 13 18:57:24.623087 kubelet[2947]: I0213 18:57:24.623036    2947 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 18:57:24.680954 kubelet[2947]: E0213 18:57:24.680807    2947 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.1.1-a-21f48afc48" not found
Feb 13 18:57:24.704211 kubelet[2947]: E0213 18:57:24.703732    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:24.704211 kubelet[2947]: E0213 18:57:24.704043    2947 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:24.831511 kubelet[2947]: E0213 18:57:24.831447    2947 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.1-a-21f48afc48\" not found" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:25.114893 kubelet[2947]: I0213 18:57:25.114515    2947 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:25.127676 kubelet[2947]: I0213 18:57:25.127276    2947 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:25.127676 kubelet[2947]: E0213 18:57:25.127321    2947 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4186.1.1-a-21f48afc48\": node \"ci-4186.1.1-a-21f48afc48\" not found"
Feb 13 18:57:25.224172 kubelet[2947]: I0213 18:57:25.224126    2947 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:25.240649 kubelet[2947]: W0213 18:57:25.240564    2947 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 18:57:25.240931 kubelet[2947]: I0213 18:57:25.240897    2947 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:25.249502 kubelet[2947]: W0213 18:57:25.249464    2947 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 18:57:25.249634 kubelet[2947]: I0213 18:57:25.249587    2947 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:25.254242 kubelet[2947]: W0213 18:57:25.254143    2947 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 18:57:25.899682 systemd[1]: Reloading requested from client PID 3220 ('systemctl') (unit session-9.scope)...
Feb 13 18:57:25.899710 systemd[1]: Reloading...
Feb 13 18:57:25.993774 zram_generator::config[3263]: No configuration found.
Feb 13 18:57:26.094729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:57:26.187628 systemd[1]: Reloading finished in 287 ms.
Feb 13 18:57:26.223625 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:57:26.248047 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 18:57:26.248308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:57:26.248368 systemd[1]: kubelet.service: Consumed 1.053s CPU time, 121.6M memory peak, 0B memory swap peak.
Feb 13 18:57:26.255590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:57:26.438939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:57:26.452168 (kubelet)[3324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 18:57:26.490174 kubelet[3324]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:57:26.490174 kubelet[3324]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 18:57:26.490174 kubelet[3324]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:57:26.490530 kubelet[3324]: I0213 18:57:26.490232    3324 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 18:57:26.500819 kubelet[3324]: I0213 18:57:26.500714    3324 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 18:57:26.500819 kubelet[3324]: I0213 18:57:26.500763    3324 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 18:57:26.501104 kubelet[3324]: I0213 18:57:26.501076    3324 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 18:57:26.502940 kubelet[3324]: I0213 18:57:26.502854    3324 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 18:57:26.506597 kubelet[3324]: I0213 18:57:26.506553    3324 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 18:57:26.510029 kubelet[3324]: E0213 18:57:26.509980    3324 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 18:57:26.510029 kubelet[3324]: I0213 18:57:26.510026    3324 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 18:57:26.514202 kubelet[3324]: I0213 18:57:26.514163    3324 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 18:57:26.515450 kubelet[3324]: I0213 18:57:26.514404    3324 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 18:57:26.515450 kubelet[3324]: I0213 18:57:26.514441    3324 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.1-a-21f48afc48","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 18:57:26.515450 kubelet[3324]: I0213 18:57:26.514638    3324 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 18:57:26.515450 kubelet[3324]: I0213 18:57:26.514646    3324 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 18:57:26.516223 kubelet[3324]: I0213 18:57:26.514724    3324 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:57:26.516223 kubelet[3324]: I0213 18:57:26.514859    3324 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 18:57:26.516223 kubelet[3324]: I0213 18:57:26.514875    3324 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 18:57:26.516223 kubelet[3324]: I0213 18:57:26.514893    3324 kubelet.go:352] "Adding apiserver pod source"
Feb 13 18:57:26.516223 kubelet[3324]: I0213 18:57:26.514905    3324 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 18:57:26.520009 kubelet[3324]: I0213 18:57:26.519977    3324 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 18:57:26.522818 kubelet[3324]: I0213 18:57:26.521792    3324 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 18:57:26.539435 kubelet[3324]: I0213 18:57:26.539386    3324 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 18:57:26.539435 kubelet[3324]: I0213 18:57:26.539435    3324 server.go:1287] "Started kubelet"
Feb 13 18:57:26.543429 kubelet[3324]: I0213 18:57:26.543391    3324 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 18:57:26.547655 kubelet[3324]: I0213 18:57:26.547591    3324 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 18:57:26.548575 kubelet[3324]: I0213 18:57:26.548542    3324 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 18:57:26.550803 kubelet[3324]: I0213 18:57:26.550738    3324 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 18:57:26.551001 kubelet[3324]: I0213 18:57:26.550967    3324 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 18:57:26.552040 kubelet[3324]: I0213 18:57:26.551199    3324 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 18:57:26.554199 kubelet[3324]: I0213 18:57:26.552265    3324 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 18:57:26.554199 kubelet[3324]: I0213 18:57:26.552363    3324 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 18:57:26.554199 kubelet[3324]: I0213 18:57:26.552494    3324 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 18:57:26.554199 kubelet[3324]: I0213 18:57:26.554018    3324 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 18:57:26.556756 kubelet[3324]: I0213 18:57:26.556730    3324 factory.go:221] Registration of the containerd container factory successfully
Feb 13 18:57:26.556874 kubelet[3324]: I0213 18:57:26.556865    3324 factory.go:221] Registration of the systemd container factory successfully
Feb 13 18:57:26.558482 kubelet[3324]: I0213 18:57:26.558437    3324 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 18:57:26.559450 kubelet[3324]: I0213 18:57:26.559410    3324 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 18:57:26.559450 kubelet[3324]: I0213 18:57:26.559445    3324 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 18:57:26.559546 kubelet[3324]: I0213 18:57:26.559487    3324 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 18:57:26.559546 kubelet[3324]: I0213 18:57:26.559493    3324 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 18:57:26.559546 kubelet[3324]: E0213 18:57:26.559533    3324 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 18:57:26.610049 kubelet[3324]: I0213 18:57:26.610015    3324 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 18:57:26.610049 kubelet[3324]: I0213 18:57:26.610037    3324 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 18:57:26.610049 kubelet[3324]: I0213 18:57:26.610062    3324 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:57:26.610292 kubelet[3324]: I0213 18:57:26.610268    3324 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 18:57:26.610325 kubelet[3324]: I0213 18:57:26.610286    3324 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 18:57:26.610325 kubelet[3324]: I0213 18:57:26.610307    3324 policy_none.go:49] "None policy: Start"
Feb 13 18:57:26.610325 kubelet[3324]: I0213 18:57:26.610315    3324 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 18:57:26.610325 kubelet[3324]: I0213 18:57:26.610324    3324 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 18:57:26.610447 kubelet[3324]: I0213 18:57:26.610428    3324 state_mem.go:75] "Updated machine memory state"
Feb 13 18:57:26.614223 kubelet[3324]: I0213 18:57:26.614199    3324 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 18:57:26.615065 kubelet[3324]: I0213 18:57:26.614826    3324 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 18:57:26.615065 kubelet[3324]: I0213 18:57:26.614840    3324 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 18:57:26.615184 kubelet[3324]: I0213 18:57:26.615110    3324 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 18:57:26.617469 kubelet[3324]: E0213 18:57:26.617449    3324 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 18:57:26.660364 kubelet[3324]: I0213 18:57:26.660186    3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.660364 kubelet[3324]: I0213 18:57:26.660305    3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.660844 kubelet[3324]: I0213 18:57:26.660186    3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.670667 kubelet[3324]: W0213 18:57:26.670503    3324 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 18:57:26.670667 kubelet[3324]: E0213 18:57:26.670584    3324 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186.1.1-a-21f48afc48\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.671525 kubelet[3324]: W0213 18:57:26.671493    3324 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 18:57:26.671525 kubelet[3324]: W0213 18:57:26.671529    3324 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 18:57:26.671624 kubelet[3324]: E0213 18:57:26.671559    3324 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.671655 kubelet[3324]: E0213 18:57:26.671625    3324 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.1-a-21f48afc48\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.726376 kubelet[3324]: I0213 18:57:26.726277    3324 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.737305 kubelet[3324]: I0213 18:57:26.737266    3324 kubelet_node_status.go:125] "Node was previously registered" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.737436 kubelet[3324]: I0213 18:57:26.737357    3324 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.854383 kubelet[3324]: I0213 18:57:26.854278    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3829492a238f93c2cb1acc376dc82b2c-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-a-21f48afc48\" (UID: \"3829492a238f93c2cb1acc376dc82b2c\") " pod="kube-system/kube-scheduler-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.854383 kubelet[3324]: I0213 18:57:26.854346    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21072e7a19cd84b0dfff5cbbc2ff08ae-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-a-21f48afc48\" (UID: \"21072e7a19cd84b0dfff5cbbc2ff08ae\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.854383 kubelet[3324]: I0213 18:57:26.854374    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21072e7a19cd84b0dfff5cbbc2ff08ae-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-a-21f48afc48\" (UID: \"21072e7a19cd84b0dfff5cbbc2ff08ae\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.855001 kubelet[3324]: I0213 18:57:26.854417    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21072e7a19cd84b0dfff5cbbc2ff08ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-a-21f48afc48\" (UID: \"21072e7a19cd84b0dfff5cbbc2ff08ae\") " pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.855001 kubelet[3324]: I0213 18:57:26.854441    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.855001 kubelet[3324]: I0213 18:57:26.854458    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.855001 kubelet[3324]: I0213 18:57:26.854476    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.855001 kubelet[3324]: I0213 18:57:26.854506    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.855119 kubelet[3324]: I0213 18:57:26.854521    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c676114a2f00c154f031842422933bad-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" (UID: \"c676114a2f00c154f031842422933bad\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:26.918680 sudo[3358]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 18:57:26.919275 sudo[3358]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 18:57:27.393034 sudo[3358]: pam_unix(sudo:session): session closed for user root
Feb 13 18:57:27.516396 kubelet[3324]: I0213 18:57:27.516356    3324 apiserver.go:52] "Watching apiserver"
Feb 13 18:57:27.553385 kubelet[3324]: I0213 18:57:27.553345    3324 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 18:57:27.595173 kubelet[3324]: I0213 18:57:27.594231    3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:27.595173 kubelet[3324]: I0213 18:57:27.594643    3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:27.608181 kubelet[3324]: W0213 18:57:27.608156    3324 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 18:57:27.608624 kubelet[3324]: W0213 18:57:27.608610    3324 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 13 18:57:27.609116 kubelet[3324]: E0213 18:57:27.608852    3324 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4186.1.1-a-21f48afc48\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:27.609116 kubelet[3324]: E0213 18:57:27.608802    3324 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.1-a-21f48afc48\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48"
Feb 13 18:57:27.636317 kubelet[3324]: I0213 18:57:27.636171    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.1-a-21f48afc48" podStartSLOduration=2.636150358 podStartE2EDuration="2.636150358s" podCreationTimestamp="2025-02-13 18:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:57:27.624578837 +0000 UTC m=+1.168910848" watchObservedRunningTime="2025-02-13 18:57:27.636150358 +0000 UTC m=+1.180482369"
Feb 13 18:57:27.657589 kubelet[3324]: I0213 18:57:27.656986    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.1-a-21f48afc48" podStartSLOduration=2.65696904 podStartE2EDuration="2.65696904s" podCreationTimestamp="2025-02-13 18:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:57:27.637047398 +0000 UTC m=+1.181379409" watchObservedRunningTime="2025-02-13 18:57:27.65696904 +0000 UTC m=+1.201301051"
Feb 13 18:57:27.679608 kubelet[3324]: I0213 18:57:27.679435    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.1-a-21f48afc48" podStartSLOduration=2.679418003 podStartE2EDuration="2.679418003s" podCreationTimestamp="2025-02-13 18:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:57:27.660261801 +0000 UTC m=+1.204593812" watchObservedRunningTime="2025-02-13 18:57:27.679418003 +0000 UTC m=+1.223750014"
Feb 13 18:57:28.922102 sudo[2394]: pam_unix(sudo:session): session closed for user root
Feb 13 18:57:28.996330 sshd[2393]: Connection closed by 10.200.16.10 port 55658
Feb 13 18:57:28.996233 sshd-session[2391]: pam_unix(sshd:session): session closed for user core
Feb 13 18:57:28.999218 systemd-logind[1729]: Session 9 logged out. Waiting for processes to exit.
Feb 13 18:57:28.999480 systemd[1]: sshd@6-10.200.20.41:22-10.200.16.10:55658.service: Deactivated successfully.
Feb 13 18:57:29.001923 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 18:57:29.002157 systemd[1]: session-9.scope: Consumed 7.316s CPU time, 150.5M memory peak, 0B memory swap peak.
Feb 13 18:57:29.003920 systemd-logind[1729]: Removed session 9.
Feb 13 18:57:31.498352 kubelet[3324]: I0213 18:57:31.498266    3324 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 18:57:31.498827 containerd[1761]: time="2025-02-13T18:57:31.498628897Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 18:57:31.499013 kubelet[3324]: I0213 18:57:31.498872    3324 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 18:57:32.386125 systemd[1]: Created slice kubepods-besteffort-pode40d0653_76f2_4221_a20b_6b4684af310f.slice - libcontainer container kubepods-besteffort-pode40d0653_76f2_4221_a20b_6b4684af310f.slice.
Feb 13 18:57:32.393749 kubelet[3324]: I0213 18:57:32.393679    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-hostproc\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.393948 kubelet[3324]: I0213 18:57:32.393932    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47tx6\" (UniqueName: \"kubernetes.io/projected/d99ef4a3-aa74-44c9-b6e8-9df48433774c-kube-api-access-47tx6\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394025 kubelet[3324]: I0213 18:57:32.394014    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-lib-modules\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394096 kubelet[3324]: I0213 18:57:32.394084    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-host-proc-sys-net\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394716 kubelet[3324]: I0213 18:57:32.394157    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-cgroup\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394716 kubelet[3324]: I0213 18:57:32.394179    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d99ef4a3-aa74-44c9-b6e8-9df48433774c-clustermesh-secrets\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394716 kubelet[3324]: I0213 18:57:32.394202    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e40d0653-76f2-4221-a20b-6b4684af310f-kube-proxy\") pod \"kube-proxy-9tzxt\" (UID: \"e40d0653-76f2-4221-a20b-6b4684af310f\") " pod="kube-system/kube-proxy-9tzxt"
Feb 13 18:57:32.394716 kubelet[3324]: I0213 18:57:32.394231    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jtr\" (UniqueName: \"kubernetes.io/projected/e40d0653-76f2-4221-a20b-6b4684af310f-kube-api-access-b5jtr\") pod \"kube-proxy-9tzxt\" (UID: \"e40d0653-76f2-4221-a20b-6b4684af310f\") " pod="kube-system/kube-proxy-9tzxt"
Feb 13 18:57:32.394716 kubelet[3324]: I0213 18:57:32.394253    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-bpf-maps\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394872 kubelet[3324]: I0213 18:57:32.394269    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-host-proc-sys-kernel\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394872 kubelet[3324]: I0213 18:57:32.394287    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cni-path\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394872 kubelet[3324]: I0213 18:57:32.394330    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-xtables-lock\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394872 kubelet[3324]: I0213 18:57:32.394351    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e40d0653-76f2-4221-a20b-6b4684af310f-lib-modules\") pod \"kube-proxy-9tzxt\" (UID: \"e40d0653-76f2-4221-a20b-6b4684af310f\") " pod="kube-system/kube-proxy-9tzxt"
Feb 13 18:57:32.394872 kubelet[3324]: I0213 18:57:32.394372    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-run\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394872 kubelet[3324]: I0213 18:57:32.394389    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-etc-cni-netd\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394991 kubelet[3324]: I0213 18:57:32.394407    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-config-path\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394991 kubelet[3324]: I0213 18:57:32.394427    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d99ef4a3-aa74-44c9-b6e8-9df48433774c-hubble-tls\") pod \"cilium-4xvt6\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") " pod="kube-system/cilium-4xvt6"
Feb 13 18:57:32.394991 kubelet[3324]: I0213 18:57:32.394451    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e40d0653-76f2-4221-a20b-6b4684af310f-xtables-lock\") pod \"kube-proxy-9tzxt\" (UID: \"e40d0653-76f2-4221-a20b-6b4684af310f\") " pod="kube-system/kube-proxy-9tzxt"
Feb 13 18:57:32.412352 systemd[1]: Created slice kubepods-burstable-podd99ef4a3_aa74_44c9_b6e8_9df48433774c.slice - libcontainer container kubepods-burstable-podd99ef4a3_aa74_44c9_b6e8_9df48433774c.slice.
Feb 13 18:57:32.542998 kubelet[3324]: I0213 18:57:32.542662    3324 status_manager.go:890] "Failed to get status for pod" podUID="ee6beb99-56c1-4c5b-8b0d-7ada8e046484" pod="kube-system/cilium-operator-6c4d7847fc-csxsm" err="pods \"cilium-operator-6c4d7847fc-csxsm\" is forbidden: User \"system:node:ci-4186.1.1-a-21f48afc48\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.1-a-21f48afc48' and this object"
Feb 13 18:57:32.553080 systemd[1]: Created slice kubepods-besteffort-podee6beb99_56c1_4c5b_8b0d_7ada8e046484.slice - libcontainer container kubepods-besteffort-podee6beb99_56c1_4c5b_8b0d_7ada8e046484.slice.
Feb 13 18:57:32.595670 kubelet[3324]: I0213 18:57:32.595621    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh72b\" (UniqueName: \"kubernetes.io/projected/ee6beb99-56c1-4c5b-8b0d-7ada8e046484-kube-api-access-wh72b\") pod \"cilium-operator-6c4d7847fc-csxsm\" (UID: \"ee6beb99-56c1-4c5b-8b0d-7ada8e046484\") " pod="kube-system/cilium-operator-6c4d7847fc-csxsm"
Feb 13 18:57:32.595670 kubelet[3324]: I0213 18:57:32.595678    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee6beb99-56c1-4c5b-8b0d-7ada8e046484-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-csxsm\" (UID: \"ee6beb99-56c1-4c5b-8b0d-7ada8e046484\") " pod="kube-system/cilium-operator-6c4d7847fc-csxsm"
Feb 13 18:57:32.707784 containerd[1761]: time="2025-02-13T18:57:32.705412710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9tzxt,Uid:e40d0653-76f2-4221-a20b-6b4684af310f,Namespace:kube-system,Attempt:0,}"
Feb 13 18:57:32.716088 containerd[1761]: time="2025-02-13T18:57:32.716044996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xvt6,Uid:d99ef4a3-aa74-44c9-b6e8-9df48433774c,Namespace:kube-system,Attempt:0,}"
Feb 13 18:57:32.752769 containerd[1761]: time="2025-02-13T18:57:32.752607454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:57:32.752769 containerd[1761]: time="2025-02-13T18:57:32.752663574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:57:32.752769 containerd[1761]: time="2025-02-13T18:57:32.752674934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:32.753767 containerd[1761]: time="2025-02-13T18:57:32.752974855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:32.767902 systemd[1]: Started cri-containerd-d61ee3c26999701212f370ec540b7680f16e27aa1f84ccc8ac8873774f4fff43.scope - libcontainer container d61ee3c26999701212f370ec540b7680f16e27aa1f84ccc8ac8873774f4fff43.
Feb 13 18:57:32.777781 containerd[1761]: time="2025-02-13T18:57:32.777037067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:57:32.778710 containerd[1761]: time="2025-02-13T18:57:32.778483708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:57:32.778710 containerd[1761]: time="2025-02-13T18:57:32.778515548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:32.778710 containerd[1761]: time="2025-02-13T18:57:32.778600708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:32.801006 systemd[1]: Started cri-containerd-d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503.scope - libcontainer container d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503.
Feb 13 18:57:32.802624 containerd[1761]: time="2025-02-13T18:57:32.802374960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9tzxt,Uid:e40d0653-76f2-4221-a20b-6b4684af310f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d61ee3c26999701212f370ec540b7680f16e27aa1f84ccc8ac8873774f4fff43\""
Feb 13 18:57:32.806882 containerd[1761]: time="2025-02-13T18:57:32.806833322Z" level=info msg="CreateContainer within sandbox \"d61ee3c26999701212f370ec540b7680f16e27aa1f84ccc8ac8873774f4fff43\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 18:57:32.831781 containerd[1761]: time="2025-02-13T18:57:32.831733295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xvt6,Uid:d99ef4a3-aa74-44c9-b6e8-9df48433774c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\""
Feb 13 18:57:32.833928 containerd[1761]: time="2025-02-13T18:57:32.833892776Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 18:57:32.855670 containerd[1761]: time="2025-02-13T18:57:32.855588587Z" level=info msg="CreateContainer within sandbox \"d61ee3c26999701212f370ec540b7680f16e27aa1f84ccc8ac8873774f4fff43\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a583fd43dd91f28ce70e1536dcfc2597a75ac9ca13c8660abc7b8e0a6ab506b8\""
Feb 13 18:57:32.856332 containerd[1761]: time="2025-02-13T18:57:32.856182388Z" level=info msg="StartContainer for \"a583fd43dd91f28ce70e1536dcfc2597a75ac9ca13c8660abc7b8e0a6ab506b8\""
Feb 13 18:57:32.858278 containerd[1761]: time="2025-02-13T18:57:32.858225909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-csxsm,Uid:ee6beb99-56c1-4c5b-8b0d-7ada8e046484,Namespace:kube-system,Attempt:0,}"
Feb 13 18:57:32.885936 systemd[1]: Started cri-containerd-a583fd43dd91f28ce70e1536dcfc2597a75ac9ca13c8660abc7b8e0a6ab506b8.scope - libcontainer container a583fd43dd91f28ce70e1536dcfc2597a75ac9ca13c8660abc7b8e0a6ab506b8.
Feb 13 18:57:32.915220 containerd[1761]: time="2025-02-13T18:57:32.915123418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:57:32.915572 containerd[1761]: time="2025-02-13T18:57:32.915400378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:57:32.915572 containerd[1761]: time="2025-02-13T18:57:32.915416578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:32.916589 containerd[1761]: time="2025-02-13T18:57:32.916432499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:32.931944 containerd[1761]: time="2025-02-13T18:57:32.931905147Z" level=info msg="StartContainer for \"a583fd43dd91f28ce70e1536dcfc2597a75ac9ca13c8660abc7b8e0a6ab506b8\" returns successfully"
Feb 13 18:57:32.939369 systemd[1]: Started cri-containerd-ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754.scope - libcontainer container ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754.
Feb 13 18:57:32.982142 containerd[1761]: time="2025-02-13T18:57:32.982021813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-csxsm,Uid:ee6beb99-56c1-4c5b-8b0d-7ada8e046484,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754\""
Feb 13 18:57:33.621460 kubelet[3324]: I0213 18:57:33.621063    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9tzxt" podStartSLOduration=1.621046782 podStartE2EDuration="1.621046782s" podCreationTimestamp="2025-02-13 18:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:57:33.620637261 +0000 UTC m=+7.164969272" watchObservedRunningTime="2025-02-13 18:57:33.621046782 +0000 UTC m=+7.165378793"
Feb 13 18:57:36.835394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055325713.mount: Deactivated successfully.
Feb 13 18:57:39.288719 containerd[1761]: time="2025-02-13T18:57:39.288068780Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:39.291365 containerd[1761]: time="2025-02-13T18:57:39.291315422Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 18:57:39.295608 containerd[1761]: time="2025-02-13T18:57:39.295577424Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:39.296992 containerd[1761]: time="2025-02-13T18:57:39.296853905Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.462683409s"
Feb 13 18:57:39.296992 containerd[1761]: time="2025-02-13T18:57:39.296890785Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 18:57:39.298597 containerd[1761]: time="2025-02-13T18:57:39.298453786Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 18:57:39.300361 containerd[1761]: time="2025-02-13T18:57:39.300320547Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 18:57:39.339010 containerd[1761]: time="2025-02-13T18:57:39.338956527Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\""
Feb 13 18:57:39.340129 containerd[1761]: time="2025-02-13T18:57:39.340068487Z" level=info msg="StartContainer for \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\""
Feb 13 18:57:39.372885 systemd[1]: Started cri-containerd-ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae.scope - libcontainer container ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae.
Feb 13 18:57:39.400646 containerd[1761]: time="2025-02-13T18:57:39.400522038Z" level=info msg="StartContainer for \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\" returns successfully"
Feb 13 18:57:39.409232 systemd[1]: cri-containerd-ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae.scope: Deactivated successfully.
Feb 13 18:57:40.326002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae-rootfs.mount: Deactivated successfully.
Feb 13 18:57:40.515756 containerd[1761]: time="2025-02-13T18:57:40.515671611Z" level=info msg="shim disconnected" id=ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae namespace=k8s.io
Feb 13 18:57:40.515756 containerd[1761]: time="2025-02-13T18:57:40.515748091Z" level=warning msg="cleaning up after shim disconnected" id=ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae namespace=k8s.io
Feb 13 18:57:40.515756 containerd[1761]: time="2025-02-13T18:57:40.515756291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 18:57:40.635623 containerd[1761]: time="2025-02-13T18:57:40.635458525Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 18:57:40.671381 containerd[1761]: time="2025-02-13T18:57:40.671330163Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\""
Feb 13 18:57:40.672517 containerd[1761]: time="2025-02-13T18:57:40.672469963Z" level=info msg="StartContainer for \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\""
Feb 13 18:57:40.701905 systemd[1]: Started cri-containerd-a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157.scope - libcontainer container a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157.
Feb 13 18:57:40.727819 containerd[1761]: time="2025-02-13T18:57:40.727771401Z" level=info msg="StartContainer for \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\" returns successfully"
Feb 13 18:57:40.738391 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 18:57:40.738608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:57:40.738677 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:57:40.746185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:57:40.746376 systemd[1]: cri-containerd-a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157.scope: Deactivated successfully.
Feb 13 18:57:40.766016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157-rootfs.mount: Deactivated successfully.
Feb 13 18:57:40.767921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:57:40.780715 containerd[1761]: time="2025-02-13T18:57:40.780636718Z" level=info msg="shim disconnected" id=a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157 namespace=k8s.io
Feb 13 18:57:40.780715 containerd[1761]: time="2025-02-13T18:57:40.780708798Z" level=warning msg="cleaning up after shim disconnected" id=a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157 namespace=k8s.io
Feb 13 18:57:40.780715 containerd[1761]: time="2025-02-13T18:57:40.780717838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 18:57:41.581194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166219112.mount: Deactivated successfully.
Feb 13 18:57:41.639656 containerd[1761]: time="2025-02-13T18:57:41.639512956Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 18:57:41.684537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260721186.mount: Deactivated successfully.
Feb 13 18:57:41.705257 containerd[1761]: time="2025-02-13T18:57:41.705198673Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\""
Feb 13 18:57:41.707055 containerd[1761]: time="2025-02-13T18:57:41.707012153Z" level=info msg="StartContainer for \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\""
Feb 13 18:57:41.758898 systemd[1]: Started cri-containerd-36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8.scope - libcontainer container 36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8.
Feb 13 18:57:41.794182 systemd[1]: cri-containerd-36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8.scope: Deactivated successfully.
Feb 13 18:57:41.796772 containerd[1761]: time="2025-02-13T18:57:41.795974229Z" level=info msg="StartContainer for \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\" returns successfully"
Feb 13 18:57:41.905178 containerd[1761]: time="2025-02-13T18:57:41.904917663Z" level=info msg="shim disconnected" id=36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8 namespace=k8s.io
Feb 13 18:57:41.905178 containerd[1761]: time="2025-02-13T18:57:41.904975063Z" level=warning msg="cleaning up after shim disconnected" id=36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8 namespace=k8s.io
Feb 13 18:57:41.905178 containerd[1761]: time="2025-02-13T18:57:41.904983023Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 18:57:42.168646 containerd[1761]: time="2025-02-13T18:57:42.167659291Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:42.171770 containerd[1761]: time="2025-02-13T18:57:42.171701251Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 18:57:42.177713 containerd[1761]: time="2025-02-13T18:57:42.176315370Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:57:42.177713 containerd[1761]: time="2025-02-13T18:57:42.177604930Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.879119504s"
Feb 13 18:57:42.177713 containerd[1761]: time="2025-02-13T18:57:42.177635250Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 18:57:42.180231 containerd[1761]: time="2025-02-13T18:57:42.180187610Z" level=info msg="CreateContainer within sandbox \"ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 18:57:42.214977 containerd[1761]: time="2025-02-13T18:57:42.214928808Z" level=info msg="CreateContainer within sandbox \"ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\""
Feb 13 18:57:42.216345 containerd[1761]: time="2025-02-13T18:57:42.215429688Z" level=info msg="StartContainer for \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\""
Feb 13 18:57:42.235867 systemd[1]: Started cri-containerd-ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f.scope - libcontainer container ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f.
Feb 13 18:57:42.266808 containerd[1761]: time="2025-02-13T18:57:42.266761126Z" level=info msg="StartContainer for \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\" returns successfully"
Feb 13 18:57:42.579033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8-rootfs.mount: Deactivated successfully.
Feb 13 18:57:42.650741 containerd[1761]: time="2025-02-13T18:57:42.650669387Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 18:57:42.666587 kubelet[3324]: I0213 18:57:42.666512    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-csxsm" podStartSLOduration=1.473918311 podStartE2EDuration="10.666493146s" podCreationTimestamp="2025-02-13 18:57:32 +0000 UTC" firstStartedPulling="2025-02-13 18:57:32.986156335 +0000 UTC m=+6.530488346" lastFinishedPulling="2025-02-13 18:57:42.17873117 +0000 UTC m=+15.723063181" observedRunningTime="2025-02-13 18:57:42.664236427 +0000 UTC m=+16.208568438" watchObservedRunningTime="2025-02-13 18:57:42.666493146 +0000 UTC m=+16.210825157"
Feb 13 18:57:42.686653 containerd[1761]: time="2025-02-13T18:57:42.686585626Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\""
Feb 13 18:57:42.687464 containerd[1761]: time="2025-02-13T18:57:42.687426785Z" level=info msg="StartContainer for \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\""
Feb 13 18:57:42.734925 systemd[1]: Started cri-containerd-35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c.scope - libcontainer container 35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c.
Feb 13 18:57:42.789939 systemd[1]: cri-containerd-35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c.scope: Deactivated successfully.
Feb 13 18:57:42.791097 containerd[1761]: time="2025-02-13T18:57:42.791049420Z" level=info msg="StartContainer for \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\" returns successfully"
Feb 13 18:57:42.821952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c-rootfs.mount: Deactivated successfully.
Feb 13 18:57:43.034854 containerd[1761]: time="2025-02-13T18:57:43.034680369Z" level=info msg="shim disconnected" id=35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c namespace=k8s.io
Feb 13 18:57:43.034854 containerd[1761]: time="2025-02-13T18:57:43.034763689Z" level=warning msg="cleaning up after shim disconnected" id=35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c namespace=k8s.io
Feb 13 18:57:43.034854 containerd[1761]: time="2025-02-13T18:57:43.034783209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 18:57:43.653840 containerd[1761]: time="2025-02-13T18:57:43.653792259Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 18:57:43.699546 containerd[1761]: time="2025-02-13T18:57:43.699403256Z" level=info msg="CreateContainer within sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\""
Feb 13 18:57:43.700324 containerd[1761]: time="2025-02-13T18:57:43.700297176Z" level=info msg="StartContainer for \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\""
Feb 13 18:57:43.727588 systemd[1]: run-containerd-runc-k8s.io-b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a-runc.Kh4hLc.mount: Deactivated successfully.
Feb 13 18:57:43.736893 systemd[1]: Started cri-containerd-b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a.scope - libcontainer container b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a.
Feb 13 18:57:43.771558 containerd[1761]: time="2025-02-13T18:57:43.771505973Z" level=info msg="StartContainer for \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\" returns successfully"
Feb 13 18:57:43.838344 kubelet[3324]: I0213 18:57:43.837412    3324 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 18:57:43.883666 systemd[1]: Created slice kubepods-burstable-pod3ef825ed_593d_42d0_b034_3fb8ef73ea34.slice - libcontainer container kubepods-burstable-pod3ef825ed_593d_42d0_b034_3fb8ef73ea34.slice.
Feb 13 18:57:43.893275 systemd[1]: Created slice kubepods-burstable-pode3d919a5_c0e7_4854_a816_6f6c9ffa332a.slice - libcontainer container kubepods-burstable-pode3d919a5_c0e7_4854_a816_6f6c9ffa332a.slice.
Feb 13 18:57:43.973302 kubelet[3324]: I0213 18:57:43.973129    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3d919a5-c0e7-4854-a816-6f6c9ffa332a-config-volume\") pod \"coredns-668d6bf9bc-sfnj8\" (UID: \"e3d919a5-c0e7-4854-a816-6f6c9ffa332a\") " pod="kube-system/coredns-668d6bf9bc-sfnj8"
Feb 13 18:57:43.973302 kubelet[3324]: I0213 18:57:43.973175    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sq2j\" (UniqueName: \"kubernetes.io/projected/e3d919a5-c0e7-4854-a816-6f6c9ffa332a-kube-api-access-6sq2j\") pod \"coredns-668d6bf9bc-sfnj8\" (UID: \"e3d919a5-c0e7-4854-a816-6f6c9ffa332a\") " pod="kube-system/coredns-668d6bf9bc-sfnj8"
Feb 13 18:57:43.973302 kubelet[3324]: I0213 18:57:43.973199    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjp8\" (UniqueName: \"kubernetes.io/projected/3ef825ed-593d-42d0-b034-3fb8ef73ea34-kube-api-access-prjp8\") pod \"coredns-668d6bf9bc-pj9n2\" (UID: \"3ef825ed-593d-42d0-b034-3fb8ef73ea34\") " pod="kube-system/coredns-668d6bf9bc-pj9n2"
Feb 13 18:57:43.973302 kubelet[3324]: I0213 18:57:43.973221    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef825ed-593d-42d0-b034-3fb8ef73ea34-config-volume\") pod \"coredns-668d6bf9bc-pj9n2\" (UID: \"3ef825ed-593d-42d0-b034-3fb8ef73ea34\") " pod="kube-system/coredns-668d6bf9bc-pj9n2"
Feb 13 18:57:44.190874 containerd[1761]: time="2025-02-13T18:57:44.190817792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pj9n2,Uid:3ef825ed-593d-42d0-b034-3fb8ef73ea34,Namespace:kube-system,Attempt:0,}"
Feb 13 18:57:44.196844 containerd[1761]: time="2025-02-13T18:57:44.196577912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sfnj8,Uid:e3d919a5-c0e7-4854-a816-6f6c9ffa332a,Namespace:kube-system,Attempt:0,}"
Feb 13 18:57:46.014158 systemd-networkd[1337]: cilium_host: Link UP
Feb 13 18:57:46.014288 systemd-networkd[1337]: cilium_net: Link UP
Feb 13 18:57:46.014435 systemd-networkd[1337]: cilium_net: Gained carrier
Feb 13 18:57:46.014581 systemd-networkd[1337]: cilium_host: Gained carrier
Feb 13 18:57:46.014677 systemd-networkd[1337]: cilium_net: Gained IPv6LL
Feb 13 18:57:46.014848 systemd-networkd[1337]: cilium_host: Gained IPv6LL
Feb 13 18:57:46.259900 systemd-networkd[1337]: cilium_vxlan: Link UP
Feb 13 18:57:46.259907 systemd-networkd[1337]: cilium_vxlan: Gained carrier
Feb 13 18:57:46.603725 kernel: NET: Registered PF_ALG protocol family
Feb 13 18:57:47.388156 systemd-networkd[1337]: lxc_health: Link UP
Feb 13 18:57:47.398914 systemd-networkd[1337]: lxc_health: Gained carrier
Feb 13 18:57:47.528838 systemd-networkd[1337]: cilium_vxlan: Gained IPv6LL
Feb 13 18:57:47.777477 systemd-networkd[1337]: lxc6e4ef79da720: Link UP
Feb 13 18:57:47.792783 kernel: eth0: renamed from tmp24b68
Feb 13 18:57:47.797184 systemd-networkd[1337]: lxc6e4ef79da720: Gained carrier
Feb 13 18:57:47.800943 systemd-networkd[1337]: lxca2f556866363: Link UP
Feb 13 18:57:47.816722 kernel: eth0: renamed from tmp00989
Feb 13 18:57:47.826594 systemd-networkd[1337]: lxca2f556866363: Gained carrier
Feb 13 18:57:48.740533 kubelet[3324]: I0213 18:57:48.740010    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4xvt6" podStartSLOduration=10.275251361 podStartE2EDuration="16.739990891s" podCreationTimestamp="2025-02-13 18:57:32 +0000 UTC" firstStartedPulling="2025-02-13 18:57:32.833414016 +0000 UTC m=+6.377745987" lastFinishedPulling="2025-02-13 18:57:39.298153506 +0000 UTC m=+12.842485517" observedRunningTime="2025-02-13 18:57:44.680233169 +0000 UTC m=+18.224565180" watchObservedRunningTime="2025-02-13 18:57:48.739990891 +0000 UTC m=+22.284322902"
Feb 13 18:57:48.744928 systemd-networkd[1337]: lxc_health: Gained IPv6LL
Feb 13 18:57:49.640838 systemd-networkd[1337]: lxc6e4ef79da720: Gained IPv6LL
Feb 13 18:57:49.641116 systemd-networkd[1337]: lxca2f556866363: Gained IPv6LL
Feb 13 18:57:51.457022 containerd[1761]: time="2025-02-13T18:57:51.456844709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:57:51.457022 containerd[1761]: time="2025-02-13T18:57:51.456913429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:57:51.457022 containerd[1761]: time="2025-02-13T18:57:51.456926189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:51.461796 containerd[1761]: time="2025-02-13T18:57:51.457028909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:51.475720 containerd[1761]: time="2025-02-13T18:57:51.474407108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:57:51.475720 containerd[1761]: time="2025-02-13T18:57:51.474468388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:57:51.475720 containerd[1761]: time="2025-02-13T18:57:51.474483348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:51.475720 containerd[1761]: time="2025-02-13T18:57:51.474552668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:57:51.496159 systemd[1]: Started cri-containerd-24b680893c7620ef7d48c8f67d266e74d4cd2edb2807e79558a4668f58b863b8.scope - libcontainer container 24b680893c7620ef7d48c8f67d266e74d4cd2edb2807e79558a4668f58b863b8.
Feb 13 18:57:51.511979 systemd[1]: Started cri-containerd-00989809dcee0113a8b3949e73c770ac552fc487ebeea545f7a9ebc4bfd4d944.scope - libcontainer container 00989809dcee0113a8b3949e73c770ac552fc487ebeea545f7a9ebc4bfd4d944.
Feb 13 18:57:51.546288 containerd[1761]: time="2025-02-13T18:57:51.546223704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pj9n2,Uid:3ef825ed-593d-42d0-b034-3fb8ef73ea34,Namespace:kube-system,Attempt:0,} returns sandbox id \"24b680893c7620ef7d48c8f67d266e74d4cd2edb2807e79558a4668f58b863b8\""
Feb 13 18:57:51.551189 containerd[1761]: time="2025-02-13T18:57:51.551106624Z" level=info msg="CreateContainer within sandbox \"24b680893c7620ef7d48c8f67d266e74d4cd2edb2807e79558a4668f58b863b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 18:57:51.566258 containerd[1761]: time="2025-02-13T18:57:51.566204663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sfnj8,Uid:e3d919a5-c0e7-4854-a816-6f6c9ffa332a,Namespace:kube-system,Attempt:0,} returns sandbox id \"00989809dcee0113a8b3949e73c770ac552fc487ebeea545f7a9ebc4bfd4d944\""
Feb 13 18:57:51.573750 containerd[1761]: time="2025-02-13T18:57:51.572994063Z" level=info msg="CreateContainer within sandbox \"00989809dcee0113a8b3949e73c770ac552fc487ebeea545f7a9ebc4bfd4d944\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 18:57:51.599019 containerd[1761]: time="2025-02-13T18:57:51.598965941Z" level=info msg="CreateContainer within sandbox \"24b680893c7620ef7d48c8f67d266e74d4cd2edb2807e79558a4668f58b863b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c6f9d2c784d05c0d2226d05e3b6c4d02f8a92dc8017efa4d41309d6eda25eba\""
Feb 13 18:57:51.599613 containerd[1761]: time="2025-02-13T18:57:51.599455661Z" level=info msg="StartContainer for \"7c6f9d2c784d05c0d2226d05e3b6c4d02f8a92dc8017efa4d41309d6eda25eba\""
Feb 13 18:57:51.619065 containerd[1761]: time="2025-02-13T18:57:51.618505540Z" level=info msg="CreateContainer within sandbox \"00989809dcee0113a8b3949e73c770ac552fc487ebeea545f7a9ebc4bfd4d944\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8dac8cef5cd87bcb8c4d9762008746dffc051163baa28e7d5a1652fd3dcd9a3\""
Feb 13 18:57:51.619814 containerd[1761]: time="2025-02-13T18:57:51.619782500Z" level=info msg="StartContainer for \"c8dac8cef5cd87bcb8c4d9762008746dffc051163baa28e7d5a1652fd3dcd9a3\""
Feb 13 18:57:51.635003 systemd[1]: Started cri-containerd-7c6f9d2c784d05c0d2226d05e3b6c4d02f8a92dc8017efa4d41309d6eda25eba.scope - libcontainer container 7c6f9d2c784d05c0d2226d05e3b6c4d02f8a92dc8017efa4d41309d6eda25eba.
Feb 13 18:57:51.649854 systemd[1]: Started cri-containerd-c8dac8cef5cd87bcb8c4d9762008746dffc051163baa28e7d5a1652fd3dcd9a3.scope - libcontainer container c8dac8cef5cd87bcb8c4d9762008746dffc051163baa28e7d5a1652fd3dcd9a3.
Feb 13 18:57:51.675586 containerd[1761]: time="2025-02-13T18:57:51.675549377Z" level=info msg="StartContainer for \"7c6f9d2c784d05c0d2226d05e3b6c4d02f8a92dc8017efa4d41309d6eda25eba\" returns successfully"
Feb 13 18:57:51.704165 containerd[1761]: time="2025-02-13T18:57:51.704119736Z" level=info msg="StartContainer for \"c8dac8cef5cd87bcb8c4d9762008746dffc051163baa28e7d5a1652fd3dcd9a3\" returns successfully"
Feb 13 18:57:51.720345 kubelet[3324]: I0213 18:57:51.719575    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pj9n2" podStartSLOduration=19.719560695 podStartE2EDuration="19.719560695s" podCreationTimestamp="2025-02-13 18:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:57:51.719095735 +0000 UTC m=+25.263427746" watchObservedRunningTime="2025-02-13 18:57:51.719560695 +0000 UTC m=+25.263892706"
Feb 13 18:57:52.462650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747049526.mount: Deactivated successfully.
Feb 13 18:57:52.719854 kubelet[3324]: I0213 18:57:52.719429    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sfnj8" podStartSLOduration=20.719408883 podStartE2EDuration="20.719408883s" podCreationTimestamp="2025-02-13 18:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:57:52.703747084 +0000 UTC m=+26.248079095" watchObservedRunningTime="2025-02-13 18:57:52.719408883 +0000 UTC m=+26.263740894"
Feb 13 18:59:19.583627 systemd[1]: Started sshd@7-10.200.20.41:22-10.200.16.10:35632.service - OpenSSH per-connection server daemon (10.200.16.10:35632).
Feb 13 18:59:20.077482 sshd[4718]: Accepted publickey for core from 10.200.16.10 port 35632 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:20.078980 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:20.082832 systemd-logind[1729]: New session 10 of user core.
Feb 13 18:59:20.085863 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 18:59:20.584335 sshd[4720]: Connection closed by 10.200.16.10 port 35632
Feb 13 18:59:20.583444 sshd-session[4718]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:20.586186 systemd[1]: sshd@7-10.200.20.41:22-10.200.16.10:35632.service: Deactivated successfully.
Feb 13 18:59:20.588426 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 18:59:20.592242 systemd-logind[1729]: Session 10 logged out. Waiting for processes to exit.
Feb 13 18:59:20.593665 systemd-logind[1729]: Removed session 10.
Feb 13 18:59:25.671132 systemd[1]: Started sshd@8-10.200.20.41:22-10.200.16.10:35638.service - OpenSSH per-connection server daemon (10.200.16.10:35638).
Feb 13 18:59:26.164100 sshd[4732]: Accepted publickey for core from 10.200.16.10 port 35638 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:26.165462 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:26.169850 systemd-logind[1729]: New session 11 of user core.
Feb 13 18:59:26.175857 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 18:59:26.591016 sshd[4734]: Connection closed by 10.200.16.10 port 35638
Feb 13 18:59:26.590119 sshd-session[4732]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:26.593726 systemd[1]: sshd@8-10.200.20.41:22-10.200.16.10:35638.service: Deactivated successfully.
Feb 13 18:59:26.597051 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 18:59:26.599087 systemd-logind[1729]: Session 11 logged out. Waiting for processes to exit.
Feb 13 18:59:26.600918 systemd-logind[1729]: Removed session 11.
Feb 13 18:59:31.682984 systemd[1]: Started sshd@9-10.200.20.41:22-10.200.16.10:53514.service - OpenSSH per-connection server daemon (10.200.16.10:53514).
Feb 13 18:59:32.129063 sshd[4748]: Accepted publickey for core from 10.200.16.10 port 53514 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:32.130341 sshd-session[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:32.134885 systemd-logind[1729]: New session 12 of user core.
Feb 13 18:59:32.139900 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 18:59:32.511742 sshd[4750]: Connection closed by 10.200.16.10 port 53514
Feb 13 18:59:32.512431 sshd-session[4748]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:32.515097 systemd-logind[1729]: Session 12 logged out. Waiting for processes to exit.
Feb 13 18:59:32.515322 systemd[1]: sshd@9-10.200.20.41:22-10.200.16.10:53514.service: Deactivated successfully.
Feb 13 18:59:32.517371 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 18:59:32.519269 systemd-logind[1729]: Removed session 12.
Feb 13 18:59:37.593929 systemd[1]: Started sshd@10-10.200.20.41:22-10.200.16.10:53520.service - OpenSSH per-connection server daemon (10.200.16.10:53520).
Feb 13 18:59:38.051874 sshd[4763]: Accepted publickey for core from 10.200.16.10 port 53520 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:38.053124 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:38.057432 systemd-logind[1729]: New session 13 of user core.
Feb 13 18:59:38.061868 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 18:59:38.438587 sshd[4765]: Connection closed by 10.200.16.10 port 53520
Feb 13 18:59:38.438494 sshd-session[4763]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:38.440914 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 18:59:38.441614 systemd[1]: sshd@10-10.200.20.41:22-10.200.16.10:53520.service: Deactivated successfully.
Feb 13 18:59:38.444255 systemd-logind[1729]: Session 13 logged out. Waiting for processes to exit.
Feb 13 18:59:38.445426 systemd-logind[1729]: Removed session 13.
Feb 13 18:59:43.527950 systemd[1]: Started sshd@11-10.200.20.41:22-10.200.16.10:44988.service - OpenSSH per-connection server daemon (10.200.16.10:44988).
Feb 13 18:59:43.974064 sshd[4776]: Accepted publickey for core from 10.200.16.10 port 44988 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:43.975403 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:43.979119 systemd-logind[1729]: New session 14 of user core.
Feb 13 18:59:43.987842 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 18:59:44.391873 sshd[4778]: Connection closed by 10.200.16.10 port 44988
Feb 13 18:59:44.392460 sshd-session[4776]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:44.396029 systemd[1]: sshd@11-10.200.20.41:22-10.200.16.10:44988.service: Deactivated successfully.
Feb 13 18:59:44.397742 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 18:59:44.398567 systemd-logind[1729]: Session 14 logged out. Waiting for processes to exit.
Feb 13 18:59:44.399504 systemd-logind[1729]: Removed session 14.
Feb 13 18:59:49.494674 systemd[1]: Started sshd@12-10.200.20.41:22-10.200.16.10:34706.service - OpenSSH per-connection server daemon (10.200.16.10:34706).
Feb 13 18:59:49.977279 sshd[4791]: Accepted publickey for core from 10.200.16.10 port 34706 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:49.979117 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:49.984553 systemd-logind[1729]: New session 15 of user core.
Feb 13 18:59:49.991859 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 18:59:50.395736 sshd[4793]: Connection closed by 10.200.16.10 port 34706
Feb 13 18:59:50.396460 sshd-session[4791]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:50.399949 systemd[1]: sshd@12-10.200.20.41:22-10.200.16.10:34706.service: Deactivated successfully.
Feb 13 18:59:50.402652 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 18:59:50.403729 systemd-logind[1729]: Session 15 logged out. Waiting for processes to exit.
Feb 13 18:59:50.404586 systemd-logind[1729]: Removed session 15.
Feb 13 18:59:50.486994 systemd[1]: Started sshd@13-10.200.20.41:22-10.200.16.10:34712.service - OpenSSH per-connection server daemon (10.200.16.10:34712).
Feb 13 18:59:50.987061 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 34712 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:50.988391 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:50.992318 systemd-logind[1729]: New session 16 of user core.
Feb 13 18:59:51.001896 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 18:59:51.461070 sshd[4807]: Connection closed by 10.200.16.10 port 34712
Feb 13 18:59:51.461643 sshd-session[4805]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:51.465086 systemd[1]: sshd@13-10.200.20.41:22-10.200.16.10:34712.service: Deactivated successfully.
Feb 13 18:59:51.467267 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 18:59:51.468171 systemd-logind[1729]: Session 16 logged out. Waiting for processes to exit.
Feb 13 18:59:51.469099 systemd-logind[1729]: Removed session 16.
Feb 13 18:59:51.550970 systemd[1]: Started sshd@14-10.200.20.41:22-10.200.16.10:34726.service - OpenSSH per-connection server daemon (10.200.16.10:34726).
Feb 13 18:59:52.036458 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 34726 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:52.037813 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:52.042259 systemd-logind[1729]: New session 17 of user core.
Feb 13 18:59:52.048903 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 18:59:52.450187 sshd[4818]: Connection closed by 10.200.16.10 port 34726
Feb 13 18:59:52.450789 sshd-session[4816]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:52.454087 systemd[1]: sshd@14-10.200.20.41:22-10.200.16.10:34726.service: Deactivated successfully.
Feb 13 18:59:52.455626 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 18:59:52.456930 systemd-logind[1729]: Session 17 logged out. Waiting for processes to exit.
Feb 13 18:59:52.457841 systemd-logind[1729]: Removed session 17.
Feb 13 18:59:57.531390 systemd[1]: Started sshd@15-10.200.20.41:22-10.200.16.10:34732.service - OpenSSH per-connection server daemon (10.200.16.10:34732).
Feb 13 18:59:57.983506 sshd[4829]: Accepted publickey for core from 10.200.16.10 port 34732 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 18:59:57.984872 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:59:57.988674 systemd-logind[1729]: New session 18 of user core.
Feb 13 18:59:57.992849 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 18:59:58.368729 sshd[4831]: Connection closed by 10.200.16.10 port 34732
Feb 13 18:59:58.369343 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
Feb 13 18:59:58.372639 systemd[1]: sshd@15-10.200.20.41:22-10.200.16.10:34732.service: Deactivated successfully.
Feb 13 18:59:58.374328 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 18:59:58.375133 systemd-logind[1729]: Session 18 logged out. Waiting for processes to exit.
Feb 13 18:59:58.376155 systemd-logind[1729]: Removed session 18.
Feb 13 19:00:03.451361 systemd[1]: Started sshd@16-10.200.20.41:22-10.200.16.10:48852.service - OpenSSH per-connection server daemon (10.200.16.10:48852).
Feb 13 19:00:03.906381 sshd[4844]: Accepted publickey for core from 10.200.16.10 port 48852 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:03.907733 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:03.911535 systemd-logind[1729]: New session 19 of user core.
Feb 13 19:00:03.918849 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:00:04.316389 sshd[4846]: Connection closed by 10.200.16.10 port 48852
Feb 13 19:00:04.316944 sshd-session[4844]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:04.320453 systemd[1]: sshd@16-10.200.20.41:22-10.200.16.10:48852.service: Deactivated successfully.
Feb 13 19:00:04.322140 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:00:04.322825 systemd-logind[1729]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:00:04.323611 systemd-logind[1729]: Removed session 19.
Feb 13 19:00:04.401962 systemd[1]: Started sshd@17-10.200.20.41:22-10.200.16.10:48862.service - OpenSSH per-connection server daemon (10.200.16.10:48862).
Feb 13 19:00:04.847878 sshd[4856]: Accepted publickey for core from 10.200.16.10 port 48862 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:04.849179 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:04.853407 systemd-logind[1729]: New session 20 of user core.
Feb 13 19:00:04.859886 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:00:05.260921 sshd[4858]: Connection closed by 10.200.16.10 port 48862
Feb 13 19:00:05.261080 sshd-session[4856]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:05.263719 systemd[1]: sshd@17-10.200.20.41:22-10.200.16.10:48862.service: Deactivated successfully.
Feb 13 19:00:05.266450 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:00:05.268238 systemd-logind[1729]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:00:05.269502 systemd-logind[1729]: Removed session 20.
Feb 13 19:00:05.353012 systemd[1]: Started sshd@18-10.200.20.41:22-10.200.16.10:48874.service - OpenSSH per-connection server daemon (10.200.16.10:48874).
Feb 13 19:00:05.800882 sshd[4867]: Accepted publickey for core from 10.200.16.10 port 48874 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:05.802197 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:05.806890 systemd-logind[1729]: New session 21 of user core.
Feb 13 19:00:05.812897 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:00:07.076448 sshd[4869]: Connection closed by 10.200.16.10 port 48874
Feb 13 19:00:07.077125 sshd-session[4867]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:07.080627 systemd[1]: sshd@18-10.200.20.41:22-10.200.16.10:48874.service: Deactivated successfully.
Feb 13 19:00:07.082888 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:00:07.083879 systemd-logind[1729]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:00:07.085287 systemd-logind[1729]: Removed session 21.
Feb 13 19:00:07.162491 systemd[1]: Started sshd@19-10.200.20.41:22-10.200.16.10:48884.service - OpenSSH per-connection server daemon (10.200.16.10:48884).
Feb 13 19:00:07.650832 sshd[4886]: Accepted publickey for core from 10.200.16.10 port 48884 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:07.652157 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:07.656737 systemd-logind[1729]: New session 22 of user core.
Feb 13 19:00:07.660856 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:00:08.184674 sshd[4888]: Connection closed by 10.200.16.10 port 48884
Feb 13 19:00:08.185044 sshd-session[4886]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:08.188873 systemd[1]: sshd@19-10.200.20.41:22-10.200.16.10:48884.service: Deactivated successfully.
Feb 13 19:00:08.190436 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:00:08.191156 systemd-logind[1729]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:00:08.192340 systemd-logind[1729]: Removed session 22.
Feb 13 19:00:08.279864 systemd[1]: Started sshd@20-10.200.20.41:22-10.200.16.10:48890.service - OpenSSH per-connection server daemon (10.200.16.10:48890).
Feb 13 19:00:08.764970 sshd[4897]: Accepted publickey for core from 10.200.16.10 port 48890 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:08.766292 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:08.770652 systemd-logind[1729]: New session 23 of user core.
Feb 13 19:00:08.778860 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:00:09.192726 sshd[4899]: Connection closed by 10.200.16.10 port 48890
Feb 13 19:00:09.193352 sshd-session[4897]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:09.197263 systemd[1]: sshd@20-10.200.20.41:22-10.200.16.10:48890.service: Deactivated successfully.
Feb 13 19:00:09.199013 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:00:09.200267 systemd-logind[1729]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:00:09.202266 systemd-logind[1729]: Removed session 23.
Feb 13 19:00:14.280079 systemd[1]: Started sshd@21-10.200.20.41:22-10.200.16.10:48006.service - OpenSSH per-connection server daemon (10.200.16.10:48006).
Feb 13 19:00:14.726836 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 48006 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:14.728173 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:14.732858 systemd-logind[1729]: New session 24 of user core.
Feb 13 19:00:14.740090 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:00:15.132795 sshd[4913]: Connection closed by 10.200.16.10 port 48006
Feb 13 19:00:15.133357 sshd-session[4911]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:15.136222 systemd-logind[1729]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:00:15.137850 systemd[1]: sshd@21-10.200.20.41:22-10.200.16.10:48006.service: Deactivated successfully.
Feb 13 19:00:15.139937 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:00:15.141178 systemd-logind[1729]: Removed session 24.
Feb 13 19:00:20.221878 systemd[1]: Started sshd@22-10.200.20.41:22-10.200.16.10:43166.service - OpenSSH per-connection server daemon (10.200.16.10:43166).
Feb 13 19:00:20.714737 sshd[4923]: Accepted publickey for core from 10.200.16.10 port 43166 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:20.716525 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:20.721129 systemd-logind[1729]: New session 25 of user core.
Feb 13 19:00:20.729920 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:00:21.126733 sshd[4925]: Connection closed by 10.200.16.10 port 43166
Feb 13 19:00:21.127278 sshd-session[4923]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:21.130015 systemd-logind[1729]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:00:21.131781 systemd[1]: sshd@22-10.200.20.41:22-10.200.16.10:43166.service: Deactivated successfully.
Feb 13 19:00:21.134081 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:00:21.135864 systemd-logind[1729]: Removed session 25.
Feb 13 19:00:26.218985 systemd[1]: Started sshd@23-10.200.20.41:22-10.200.16.10:43182.service - OpenSSH per-connection server daemon (10.200.16.10:43182).
Feb 13 19:00:26.706090 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 43182 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:26.707487 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:26.712014 systemd-logind[1729]: New session 26 of user core.
Feb 13 19:00:26.720893 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:00:27.138365 sshd[4940]: Connection closed by 10.200.16.10 port 43182
Feb 13 19:00:27.138968 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:27.142445 systemd[1]: sshd@23-10.200.20.41:22-10.200.16.10:43182.service: Deactivated successfully.
Feb 13 19:00:27.144275 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:00:27.145351 systemd-logind[1729]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:00:27.146421 systemd-logind[1729]: Removed session 26.
Feb 13 19:00:27.224905 systemd[1]: Started sshd@24-10.200.20.41:22-10.200.16.10:43196.service - OpenSSH per-connection server daemon (10.200.16.10:43196).
Feb 13 19:00:27.711948 sshd[4951]: Accepted publickey for core from 10.200.16.10 port 43196 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:27.713903 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:27.720422 systemd-logind[1729]: New session 27 of user core.
Feb 13 19:00:27.725014 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:00:30.276160 containerd[1761]: time="2025-02-13T19:00:30.275208024Z" level=info msg="StopContainer for \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\" with timeout 30 (s)"
Feb 13 19:00:30.276160 containerd[1761]: time="2025-02-13T19:00:30.275718024Z" level=info msg="Stop container \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\" with signal terminated"
Feb 13 19:00:30.296171 systemd[1]: cri-containerd-ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f.scope: Deactivated successfully.
Feb 13 19:00:30.303477 containerd[1761]: time="2025-02-13T19:00:30.303426055Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:00:30.313096 containerd[1761]: time="2025-02-13T19:00:30.312977092Z" level=info msg="StopContainer for \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\" with timeout 2 (s)"
Feb 13 19:00:30.313685 containerd[1761]: time="2025-02-13T19:00:30.313560292Z" level=info msg="Stop container \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\" with signal terminated"
Feb 13 19:00:30.321162 systemd-networkd[1337]: lxc_health: Link DOWN
Feb 13 19:00:30.321172 systemd-networkd[1337]: lxc_health: Lost carrier
Feb 13 19:00:30.329825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f-rootfs.mount: Deactivated successfully.
Feb 13 19:00:30.336879 systemd[1]: cri-containerd-b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a.scope: Deactivated successfully.
Feb 13 19:00:30.337513 systemd[1]: cri-containerd-b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a.scope: Consumed 6.431s CPU time.
Feb 13 19:00:30.360186 containerd[1761]: time="2025-02-13T19:00:30.358541598Z" level=info msg="shim disconnected" id=ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f namespace=k8s.io
Feb 13 19:00:30.360186 containerd[1761]: time="2025-02-13T19:00:30.358614158Z" level=warning msg="cleaning up after shim disconnected" id=ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f namespace=k8s.io
Feb 13 19:00:30.360186 containerd[1761]: time="2025-02-13T19:00:30.358623518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:30.359964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a-rootfs.mount: Deactivated successfully.
Feb 13 19:00:30.373980 containerd[1761]: time="2025-02-13T19:00:30.373904834Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:00:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:00:30.375785 containerd[1761]: time="2025-02-13T19:00:30.375730633Z" level=info msg="shim disconnected" id=b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a namespace=k8s.io
Feb 13 19:00:30.376028 containerd[1761]: time="2025-02-13T19:00:30.375890353Z" level=warning msg="cleaning up after shim disconnected" id=b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a namespace=k8s.io
Feb 13 19:00:30.376028 containerd[1761]: time="2025-02-13T19:00:30.375905913Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:30.381908 containerd[1761]: time="2025-02-13T19:00:30.381853111Z" level=info msg="StopContainer for \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\" returns successfully"
Feb 13 19:00:30.383172 containerd[1761]: time="2025-02-13T19:00:30.383136871Z" level=info msg="StopPodSandbox for \"ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754\""
Feb 13 19:00:30.383434 containerd[1761]: time="2025-02-13T19:00:30.383319271Z" level=info msg="Container to stop \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:30.388652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754-shm.mount: Deactivated successfully.
Feb 13 19:00:30.393474 systemd[1]: cri-containerd-ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754.scope: Deactivated successfully.
Feb 13 19:00:30.404869 containerd[1761]: time="2025-02-13T19:00:30.404803984Z" level=info msg="StopContainer for \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\" returns successfully"
Feb 13 19:00:30.405781 containerd[1761]: time="2025-02-13T19:00:30.405744264Z" level=info msg="StopPodSandbox for \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\""
Feb 13 19:00:30.405839 containerd[1761]: time="2025-02-13T19:00:30.405819184Z" level=info msg="Container to stop \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:30.405839 containerd[1761]: time="2025-02-13T19:00:30.405833704Z" level=info msg="Container to stop \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:30.405890 containerd[1761]: time="2025-02-13T19:00:30.405843344Z" level=info msg="Container to stop \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:30.405890 containerd[1761]: time="2025-02-13T19:00:30.405851784Z" level=info msg="Container to stop \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:30.405890 containerd[1761]: time="2025-02-13T19:00:30.405860184Z" level=info msg="Container to stop \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:00:30.408344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503-shm.mount: Deactivated successfully.
Feb 13 19:00:30.418168 systemd[1]: cri-containerd-d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503.scope: Deactivated successfully.
Feb 13 19:00:30.443598 containerd[1761]: time="2025-02-13T19:00:30.443272932Z" level=info msg="shim disconnected" id=ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754 namespace=k8s.io
Feb 13 19:00:30.443954 containerd[1761]: time="2025-02-13T19:00:30.443555092Z" level=warning msg="cleaning up after shim disconnected" id=ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754 namespace=k8s.io
Feb 13 19:00:30.443954 containerd[1761]: time="2025-02-13T19:00:30.443806572Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:30.445944 containerd[1761]: time="2025-02-13T19:00:30.443515332Z" level=info msg="shim disconnected" id=d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503 namespace=k8s.io
Feb 13 19:00:30.445944 containerd[1761]: time="2025-02-13T19:00:30.444901932Z" level=warning msg="cleaning up after shim disconnected" id=d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503 namespace=k8s.io
Feb 13 19:00:30.445944 containerd[1761]: time="2025-02-13T19:00:30.444915172Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:30.461161 containerd[1761]: time="2025-02-13T19:00:30.461114727Z" level=info msg="TearDown network for sandbox \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" successfully"
Feb 13 19:00:30.461343 containerd[1761]: time="2025-02-13T19:00:30.461325567Z" level=info msg="StopPodSandbox for \"d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503\" returns successfully"
Feb 13 19:00:30.461468 containerd[1761]: time="2025-02-13T19:00:30.461154727Z" level=info msg="TearDown network for sandbox \"ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754\" successfully"
Feb 13 19:00:30.461468 containerd[1761]: time="2025-02-13T19:00:30.461458087Z" level=info msg="StopPodSandbox for \"ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754\" returns successfully"
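Editor's note: the StopPodSandbox teardown above only proceeds because every container in each sandbox is already CONTAINER_EXITED. A minimal Python sketch of that same state check, assuming crictl is installed on the node and reusing the container and sandbox IDs from the log lines above:

import json
import subprocess

# IDs taken from the log lines above.
CONTAINER_ID = "ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f"
SANDBOX_ID = "ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754"

def container_state(cid: str) -> str:
    # `crictl inspect` prints the CRI container status as JSON.
    out = subprocess.run(["crictl", "inspect", "-o", "json", cid],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)["status"]["state"]  # e.g. "CONTAINER_EXITED"

if container_state(CONTAINER_ID) == "CONTAINER_EXITED":
    # Only then tear down the sandbox that hosted it, mirroring the teardown logged above.
    subprocess.run(["crictl", "stopp", SANDBOX_ID], check=True)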
Feb 13 19:00:30.614311 kubelet[3324]: I0213 19:00:30.613873    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-host-proc-sys-net\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.614311 kubelet[3324]: I0213 19:00:30.613921    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-etc-cni-netd\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.614311 kubelet[3324]: I0213 19:00:30.613946    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh72b\" (UniqueName: \"kubernetes.io/projected/ee6beb99-56c1-4c5b-8b0d-7ada8e046484-kube-api-access-wh72b\") pod \"ee6beb99-56c1-4c5b-8b0d-7ada8e046484\" (UID: \"ee6beb99-56c1-4c5b-8b0d-7ada8e046484\") "
Feb 13 19:00:30.614311 kubelet[3324]: I0213 19:00:30.613963    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-xtables-lock\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.614311 kubelet[3324]: I0213 19:00:30.613966    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.614311 kubelet[3324]: I0213 19:00:30.613980    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47tx6\" (UniqueName: \"kubernetes.io/projected/d99ef4a3-aa74-44c9-b6e8-9df48433774c-kube-api-access-47tx6\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616464 kubelet[3324]: I0213 19:00:30.614002    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d99ef4a3-aa74-44c9-b6e8-9df48433774c-hubble-tls\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616464 kubelet[3324]: I0213 19:00:30.614017    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-run\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616464 kubelet[3324]: I0213 19:00:30.614036    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee6beb99-56c1-4c5b-8b0d-7ada8e046484-cilium-config-path\") pod \"ee6beb99-56c1-4c5b-8b0d-7ada8e046484\" (UID: \"ee6beb99-56c1-4c5b-8b0d-7ada8e046484\") "
Feb 13 19:00:30.616464 kubelet[3324]: I0213 19:00:30.614058    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-hostproc\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616464 kubelet[3324]: I0213 19:00:30.614074    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-cgroup\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616464 kubelet[3324]: I0213 19:00:30.614090    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-bpf-maps\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616605 kubelet[3324]: I0213 19:00:30.614103    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cni-path\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616605 kubelet[3324]: I0213 19:00:30.614147    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-config-path\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616605 kubelet[3324]: I0213 19:00:30.614168    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-lib-modules\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616605 kubelet[3324]: I0213 19:00:30.614188    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d99ef4a3-aa74-44c9-b6e8-9df48433774c-clustermesh-secrets\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616605 kubelet[3324]: I0213 19:00:30.614204    3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-host-proc-sys-kernel\") pod \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\" (UID: \"d99ef4a3-aa74-44c9-b6e8-9df48433774c\") "
Feb 13 19:00:30.616605 kubelet[3324]: I0213 19:00:30.614241    3324 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-host-proc-sys-net\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.616750 kubelet[3324]: I0213 19:00:30.614273    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.616750 kubelet[3324]: I0213 19:00:30.614296    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.617096 kubelet[3324]: I0213 19:00:30.617048    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.617182 kubelet[3324]: I0213 19:00:30.617156    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99ef4a3-aa74-44c9-b6e8-9df48433774c-kube-api-access-47tx6" (OuterVolumeSpecName: "kube-api-access-47tx6") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "kube-api-access-47tx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:00:30.617630 kubelet[3324]: I0213 19:00:30.617597    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.617630 kubelet[3324]: I0213 19:00:30.617631    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.619427 kubelet[3324]: I0213 19:00:30.619398    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cni-path" (OuterVolumeSpecName: "cni-path") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.620621 kubelet[3324]: I0213 19:00:30.620529    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee6beb99-56c1-4c5b-8b0d-7ada8e046484-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee6beb99-56c1-4c5b-8b0d-7ada8e046484" (UID: "ee6beb99-56c1-4c5b-8b0d-7ada8e046484"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 13 19:00:30.620621 kubelet[3324]: I0213 19:00:30.620556    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-hostproc" (OuterVolumeSpecName: "hostproc") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.620621 kubelet[3324]: I0213 19:00:30.620568    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.620949 kubelet[3324]: I0213 19:00:30.620685    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee6beb99-56c1-4c5b-8b0d-7ada8e046484-kube-api-access-wh72b" (OuterVolumeSpecName: "kube-api-access-wh72b") pod "ee6beb99-56c1-4c5b-8b0d-7ada8e046484" (UID: "ee6beb99-56c1-4c5b-8b0d-7ada8e046484"). InnerVolumeSpecName "kube-api-access-wh72b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:00:30.620949 kubelet[3324]: I0213 19:00:30.620926    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:00:30.622464 kubelet[3324]: I0213 19:00:30.622427    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 13 19:00:30.622764 kubelet[3324]: I0213 19:00:30.622681    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99ef4a3-aa74-44c9-b6e8-9df48433774c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:00:30.623638 kubelet[3324]: I0213 19:00:30.623581    3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99ef4a3-aa74-44c9-b6e8-9df48433774c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d99ef4a3-aa74-44c9-b6e8-9df48433774c" (UID: "d99ef4a3-aa74-44c9-b6e8-9df48433774c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 13 19:00:30.715128 kubelet[3324]: I0213 19:00:30.715090    3324 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47tx6\" (UniqueName: \"kubernetes.io/projected/d99ef4a3-aa74-44c9-b6e8-9df48433774c-kube-api-access-47tx6\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715128 kubelet[3324]: I0213 19:00:30.715122    3324 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d99ef4a3-aa74-44c9-b6e8-9df48433774c-hubble-tls\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715128 kubelet[3324]: I0213 19:00:30.715132    3324 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-hostproc\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715128 kubelet[3324]: I0213 19:00:30.715140    3324 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-run\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715355 kubelet[3324]: I0213 19:00:30.715149    3324 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee6beb99-56c1-4c5b-8b0d-7ada8e046484-cilium-config-path\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715355 kubelet[3324]: I0213 19:00:30.715158    3324 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-cgroup\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715355 kubelet[3324]: I0213 19:00:30.715166    3324 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-bpf-maps\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715355 kubelet[3324]: I0213 19:00:30.715173    3324 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cni-path\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715355 kubelet[3324]: I0213 19:00:30.715181    3324 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-lib-modules\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715355 kubelet[3324]: I0213 19:00:30.715189    3324 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d99ef4a3-aa74-44c9-b6e8-9df48433774c-clustermesh-secrets\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715355 kubelet[3324]: I0213 19:00:30.715203    3324 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-host-proc-sys-kernel\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715355 kubelet[3324]: I0213 19:00:30.715212    3324 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d99ef4a3-aa74-44c9-b6e8-9df48433774c-cilium-config-path\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715517 kubelet[3324]: I0213 19:00:30.715220    3324 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-etc-cni-netd\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715517 kubelet[3324]: I0213 19:00:30.715231    3324 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wh72b\" (UniqueName: \"kubernetes.io/projected/ee6beb99-56c1-4c5b-8b0d-7ada8e046484-kube-api-access-wh72b\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
Feb 13 19:00:30.715517 kubelet[3324]: I0213 19:00:30.715240    3324 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d99ef4a3-aa74-44c9-b6e8-9df48433774c-xtables-lock\") on node \"ci-4186.1.1-a-21f48afc48\" DevicePath \"\""
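Editor's note: after the UnmountVolume / "Volume detached" sequence above, the per-pod volume directory under /var/lib/kubelet/pods/<uid>/volumes should be empty (it is removed entirely two seconds later, see the "Cleaned up orphaned pod volumes dir" lines further below). A small sketch, assuming root access on the node, that lists anything left behind for the cilium pod UID from the log:

import os

POD_UID = "d99ef4a3-aa74-44c9-b6e8-9df48433774c"  # UID from the log lines above
vol_dir = f"/var/lib/kubelet/pods/{POD_UID}/volumes"

leftovers = []
if os.path.isdir(vol_dir):
    for plugin in sorted(os.listdir(vol_dir)):      # e.g. kubernetes.io~projected
        plugin_dir = os.path.join(vol_dir, plugin)
        for vol in os.listdir(plugin_dir):
            leftovers.append(os.path.join(plugin_dir, vol))

print("leftover volume dirs:", leftovers or "none")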
Feb 13 19:00:30.976845 kubelet[3324]: I0213 19:00:30.975870    3324 scope.go:117] "RemoveContainer" containerID="ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f"
Feb 13 19:00:30.979521 containerd[1761]: time="2025-02-13T19:00:30.979428687Z" level=info msg="RemoveContainer for \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\""
Feb 13 19:00:30.984768 systemd[1]: Removed slice kubepods-besteffort-podee6beb99_56c1_4c5b_8b0d_7ada8e046484.slice - libcontainer container kubepods-besteffort-podee6beb99_56c1_4c5b_8b0d_7ada8e046484.slice.
Feb 13 19:00:30.995015 containerd[1761]: time="2025-02-13T19:00:30.994936602Z" level=info msg="RemoveContainer for \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\" returns successfully"
Feb 13 19:00:30.995403 kubelet[3324]: I0213 19:00:30.995370    3324 scope.go:117] "RemoveContainer" containerID="ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f"
Feb 13 19:00:30.995557 systemd[1]: Removed slice kubepods-burstable-podd99ef4a3_aa74_44c9_b6e8_9df48433774c.slice - libcontainer container kubepods-burstable-podd99ef4a3_aa74_44c9_b6e8_9df48433774c.slice.
Feb 13 19:00:30.995657 systemd[1]: kubepods-burstable-podd99ef4a3_aa74_44c9_b6e8_9df48433774c.slice: Consumed 6.503s CPU time.
Feb 13 19:00:30.996583 containerd[1761]: time="2025-02-13T19:00:30.996400162Z" level=error msg="ContainerStatus for \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\": not found"
Feb 13 19:00:30.997014 kubelet[3324]: E0213 19:00:30.996917    3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\": not found" containerID="ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f"
Feb 13 19:00:30.997091 kubelet[3324]: I0213 19:00:30.996960    3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f"} err="failed to get container status \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca0de22be8ca8725b5255886ad23f178eb54bef59eb39d7763e9af827f7c7b5f\": not found"
Feb 13 19:00:30.997091 kubelet[3324]: I0213 19:00:30.997056    3324 scope.go:117] "RemoveContainer" containerID="b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a"
Feb 13 19:00:31.000329 containerd[1761]: time="2025-02-13T19:00:31.000287881Z" level=info msg="RemoveContainer for \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\""
Feb 13 19:00:31.011954 containerd[1761]: time="2025-02-13T19:00:31.011864117Z" level=info msg="RemoveContainer for \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\" returns successfully"
Feb 13 19:00:31.012247 kubelet[3324]: I0213 19:00:31.012193    3324 scope.go:117] "RemoveContainer" containerID="35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c"
Feb 13 19:00:31.013368 containerd[1761]: time="2025-02-13T19:00:31.013330917Z" level=info msg="RemoveContainer for \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\""
Feb 13 19:00:31.022326 containerd[1761]: time="2025-02-13T19:00:31.021858554Z" level=info msg="RemoveContainer for \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\" returns successfully"
Feb 13 19:00:31.022655 kubelet[3324]: I0213 19:00:31.022626    3324 scope.go:117] "RemoveContainer" containerID="36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8"
Feb 13 19:00:31.026444 containerd[1761]: time="2025-02-13T19:00:31.026398592Z" level=info msg="RemoveContainer for \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\""
Feb 13 19:00:31.034181 containerd[1761]: time="2025-02-13T19:00:31.034097630Z" level=info msg="RemoveContainer for \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\" returns successfully"
Feb 13 19:00:31.034393 kubelet[3324]: I0213 19:00:31.034364    3324 scope.go:117] "RemoveContainer" containerID="a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157"
Feb 13 19:00:31.035741 containerd[1761]: time="2025-02-13T19:00:31.035601190Z" level=info msg="RemoveContainer for \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\""
Feb 13 19:00:31.045105 containerd[1761]: time="2025-02-13T19:00:31.045032867Z" level=info msg="RemoveContainer for \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\" returns successfully"
Feb 13 19:00:31.045543 kubelet[3324]: I0213 19:00:31.045518    3324 scope.go:117] "RemoveContainer" containerID="ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae"
Feb 13 19:00:31.046792 containerd[1761]: time="2025-02-13T19:00:31.046744906Z" level=info msg="RemoveContainer for \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\""
Feb 13 19:00:31.057445 containerd[1761]: time="2025-02-13T19:00:31.057399583Z" level=info msg="RemoveContainer for \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\" returns successfully"
Feb 13 19:00:31.057984 kubelet[3324]: I0213 19:00:31.057868    3324 scope.go:117] "RemoveContainer" containerID="b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a"
Feb 13 19:00:31.058509 containerd[1761]: time="2025-02-13T19:00:31.058225423Z" level=error msg="ContainerStatus for \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\": not found"
Feb 13 19:00:31.058569 kubelet[3324]: E0213 19:00:31.058377    3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\": not found" containerID="b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a"
Feb 13 19:00:31.058569 kubelet[3324]: I0213 19:00:31.058412    3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a"} err="failed to get container status \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b59858dbba0e1b90a4f4a719decc4efb135a77f4964e7df4964525b76b83310a\": not found"
Feb 13 19:00:31.058569 kubelet[3324]: I0213 19:00:31.058435    3324 scope.go:117] "RemoveContainer" containerID="35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c"
Feb 13 19:00:31.059127 containerd[1761]: time="2025-02-13T19:00:31.058886742Z" level=error msg="ContainerStatus for \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\": not found"
Feb 13 19:00:31.059202 kubelet[3324]: E0213 19:00:31.059071    3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\": not found" containerID="35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c"
Feb 13 19:00:31.059436 kubelet[3324]: I0213 19:00:31.059265    3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c"} err="failed to get container status \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\": rpc error: code = NotFound desc = an error occurred when try to find container \"35856ad3deb083434b660b224760d0769b60ec84b6a77db550254881fe11146c\": not found"
Feb 13 19:00:31.059436 kubelet[3324]: I0213 19:00:31.059294    3324 scope.go:117] "RemoveContainer" containerID="36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8"
Feb 13 19:00:31.059767 containerd[1761]: time="2025-02-13T19:00:31.059660542Z" level=error msg="ContainerStatus for \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\": not found"
Feb 13 19:00:31.059913 kubelet[3324]: E0213 19:00:31.059883    3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\": not found" containerID="36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8"
Feb 13 19:00:31.059959 kubelet[3324]: I0213 19:00:31.059914    3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8"} err="failed to get container status \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"36079ecfabd902286aa60f077a9819bde40c6d4039d54d198c8cb3803a4687a8\": not found"
Feb 13 19:00:31.059959 kubelet[3324]: I0213 19:00:31.059933    3324 scope.go:117] "RemoveContainer" containerID="a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157"
Feb 13 19:00:31.060245 containerd[1761]: time="2025-02-13T19:00:31.060132022Z" level=error msg="ContainerStatus for \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\": not found"
Feb 13 19:00:31.060481 kubelet[3324]: E0213 19:00:31.060363    3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\": not found" containerID="a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157"
Feb 13 19:00:31.060481 kubelet[3324]: I0213 19:00:31.060391    3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157"} err="failed to get container status \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\": rpc error: code = NotFound desc = an error occurred when try to find container \"a66c96b5cfa2398aac76d459a112132e6951ef946e0a7f04db20fb1ff9905157\": not found"
Feb 13 19:00:31.060481 kubelet[3324]: I0213 19:00:31.060407    3324 scope.go:117] "RemoveContainer" containerID="ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae"
Feb 13 19:00:31.060622 containerd[1761]: time="2025-02-13T19:00:31.060579702Z" level=error msg="ContainerStatus for \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\": not found"
Feb 13 19:00:31.060834 kubelet[3324]: E0213 19:00:31.060808    3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\": not found" containerID="ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae"
Feb 13 19:00:31.060905 kubelet[3324]: I0213 19:00:31.060884    3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae"} err="failed to get container status \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed48bddb1b29e0fa88d8797b658d9d51ebc534589b4db4293ffe22dfe62012ae\": not found"
Feb 13 19:00:31.285292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecb231087cff887248a00055064b6891579ee8e04d257113f4cb9881091bb754-rootfs.mount: Deactivated successfully.
Feb 13 19:00:31.286099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2cabf376899d524216ffc77afe4642018415fd70dd9366dbfaaeefb174ef503-rootfs.mount: Deactivated successfully.
Feb 13 19:00:31.286517 systemd[1]: var-lib-kubelet-pods-ee6beb99\x2d56c1\x2d4c5b\x2d8b0d\x2d7ada8e046484-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwh72b.mount: Deactivated successfully.
Feb 13 19:00:31.286588 systemd[1]: var-lib-kubelet-pods-d99ef4a3\x2daa74\x2d44c9\x2db6e8\x2d9df48433774c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d47tx6.mount: Deactivated successfully.
Feb 13 19:00:31.286636 systemd[1]: var-lib-kubelet-pods-d99ef4a3\x2daa74\x2d44c9\x2db6e8\x2d9df48433774c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:00:31.286685 systemd[1]: var-lib-kubelet-pods-d99ef4a3\x2daa74\x2d44c9\x2db6e8\x2d9df48433774c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
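Editor's note: the mount unit names being deactivated above are systemd-escaped paths: "-" separates path components, and literal characters such as "-" and "~" are encoded as \x2d and \x7e (systemd-escape --unescape --path performs the reverse translation). A minimal Python sketch of the same decoding, using one unit name from the log:

UNIT = r"var-lib-kubelet-pods-d99ef4a3\x2daa74\x2d44c9\x2db6e8\x2d9df48433774c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount"

def unescape_unit_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    out = bytearray()
    i = 0
    while i < len(name):
        if name.startswith("\\x", i) and i + 4 <= len(name):
            out.append(int(name[i + 2:i + 4], 16))  # \x2d -> "-", \x7e -> "~"
            i += 4
        elif name[i] == "-":
            out.append(ord("/"))                    # "-" is the path separator
            i += 1
        else:
            out.append(ord(name[i]))
            i += 1
    return "/" + out.decode()

print(unescape_unit_path(UNIT))
# /var/lib/kubelet/pods/d99ef4a3-aa74-44c9-b6e8-9df48433774c/volumes/kubernetes.io~projected/hubble-tls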
Feb 13 19:00:31.663589 kubelet[3324]: E0213 19:00:31.663495    3324 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:00:32.296196 sshd[4956]: Connection closed by 10.200.16.10 port 43196
Feb 13 19:00:32.295280 sshd-session[4951]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:32.299297 systemd[1]: sshd@24-10.200.20.41:22-10.200.16.10:43196.service: Deactivated successfully.
Feb 13 19:00:32.301419 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:00:32.301653 systemd[1]: session-27.scope: Consumed 1.660s CPU time.
Feb 13 19:00:32.302389 systemd-logind[1729]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:00:32.303778 systemd-logind[1729]: Removed session 27.
Feb 13 19:00:32.389359 systemd[1]: Started sshd@25-10.200.20.41:22-10.200.16.10:40736.service - OpenSSH per-connection server daemon (10.200.16.10:40736).
Feb 13 19:00:32.565648 kubelet[3324]: I0213 19:00:32.564839    3324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d99ef4a3-aa74-44c9-b6e8-9df48433774c" path="/var/lib/kubelet/pods/d99ef4a3-aa74-44c9-b6e8-9df48433774c/volumes"
Feb 13 19:00:32.565648 kubelet[3324]: I0213 19:00:32.565386    3324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee6beb99-56c1-4c5b-8b0d-7ada8e046484" path="/var/lib/kubelet/pods/ee6beb99-56c1-4c5b-8b0d-7ada8e046484/volumes"
Feb 13 19:00:32.879040 sshd[5114]: Accepted publickey for core from 10.200.16.10 port 40736 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:32.880427 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:32.885680 systemd-logind[1729]: New session 28 of user core.
Feb 13 19:00:32.895949 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:00:34.393499 kubelet[3324]: I0213 19:00:34.393448    3324 memory_manager.go:355] "RemoveStaleState removing state" podUID="d99ef4a3-aa74-44c9-b6e8-9df48433774c" containerName="cilium-agent"
Feb 13 19:00:34.393499 kubelet[3324]: I0213 19:00:34.393482    3324 memory_manager.go:355] "RemoveStaleState removing state" podUID="ee6beb99-56c1-4c5b-8b0d-7ada8e046484" containerName="cilium-operator"
Feb 13 19:00:34.418670 systemd[1]: Created slice kubepods-burstable-pod489ee3b4_9ec2_40e1_91a0_8ff67d8614da.slice - libcontainer container kubepods-burstable-pod489ee3b4_9ec2_40e1_91a0_8ff67d8614da.slice.
Feb 13 19:00:34.471194 sshd[5116]: Connection closed by 10.200.16.10 port 40736
Feb 13 19:00:34.472028 sshd-session[5114]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:34.474896 systemd[1]: sshd@25-10.200.20.41:22-10.200.16.10:40736.service: Deactivated successfully.
Feb 13 19:00:34.477239 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:00:34.477472 systemd[1]: session-28.scope: Consumed 1.152s CPU time.
Feb 13 19:00:34.479174 systemd-logind[1729]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:00:34.480083 systemd-logind[1729]: Removed session 28.
Feb 13 19:00:34.536785 kubelet[3324]: I0213 19:00:34.536592    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-xtables-lock\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.536785 kubelet[3324]: I0213 19:00:34.536637    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-cilium-run\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.536785 kubelet[3324]: I0213 19:00:34.536662    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-etc-cni-netd\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.536785 kubelet[3324]: I0213 19:00:34.536676    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-lib-modules\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.536785 kubelet[3324]: I0213 19:00:34.536718    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-cilium-config-path\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.536785 kubelet[3324]: I0213 19:00:34.536735    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-host-proc-sys-net\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537050 kubelet[3324]: I0213 19:00:34.536750    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-hubble-tls\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537050 kubelet[3324]: I0213 19:00:34.536768    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-clustermesh-secrets\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537050 kubelet[3324]: I0213 19:00:34.536783    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m5lf\" (UniqueName: \"kubernetes.io/projected/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-kube-api-access-8m5lf\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537050 kubelet[3324]: I0213 19:00:34.536800    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-cilium-cgroup\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537050 kubelet[3324]: I0213 19:00:34.536818    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-bpf-maps\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537050 kubelet[3324]: I0213 19:00:34.536833    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-cni-path\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537177 kubelet[3324]: I0213 19:00:34.536849    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-cilium-ipsec-secrets\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537177 kubelet[3324]: I0213 19:00:34.536863    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-host-proc-sys-kernel\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
Feb 13 19:00:34.537177 kubelet[3324]: I0213 19:00:34.536879    3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/489ee3b4-9ec2-40e1-91a0-8ff67d8614da-hostproc\") pod \"cilium-l4km8\" (UID: \"489ee3b4-9ec2-40e1-91a0-8ff67d8614da\") " pod="kube-system/cilium-l4km8"
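Editor's note: the block above lists every volume the new cilium-l4km8 pod attaches, with a UniqueName of the form kubernetes.io/<plugin>/<pod-uid>-<volume>. A small sketch, fed these kubelet journal lines on stdin, that groups the volumes by plugin type:

import re
import sys
from collections import defaultdict

# Matches e.g. UniqueName: \"kubernetes.io/host-path/<pod-uid>-xtables-lock\"
uniq_re = re.compile(r'UniqueName: \\"kubernetes\.io/([^/]+)/[0-9a-f-]{36}-([^\\"]+)\\"')

volumes = defaultdict(list)
for line in sys.stdin:
    if 'pod="kube-system/cilium-l4km8"' not in line:
        continue
    m = uniq_re.search(line)
    if m:
        volumes[m.group(1)].append(m.group(2))

for plugin, names in sorted(volumes.items()):
    print(f"{plugin}: {', '.join(sorted(names))}")
# host-path: bpf-maps, cilium-cgroup, cilium-run, cni-path, etc-cni-netd, ...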
Feb 13 19:00:34.573004 systemd[1]: Started sshd@26-10.200.20.41:22-10.200.16.10:40748.service - OpenSSH per-connection server daemon (10.200.16.10:40748).
Feb 13 19:00:34.723684 containerd[1761]: time="2025-02-13T19:00:34.723575893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l4km8,Uid:489ee3b4-9ec2-40e1-91a0-8ff67d8614da,Namespace:kube-system,Attempt:0,}"
Feb 13 19:00:34.760973 containerd[1761]: time="2025-02-13T19:00:34.760812721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:00:34.760973 containerd[1761]: time="2025-02-13T19:00:34.760880281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:00:34.760973 containerd[1761]: time="2025-02-13T19:00:34.760895761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:00:34.761583 containerd[1761]: time="2025-02-13T19:00:34.761521521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:00:34.776922 systemd[1]: Started cri-containerd-a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994.scope - libcontainer container a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994.
Feb 13 19:00:34.801499 containerd[1761]: time="2025-02-13T19:00:34.801456469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l4km8,Uid:489ee3b4-9ec2-40e1-91a0-8ff67d8614da,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\""
Feb 13 19:00:34.806081 containerd[1761]: time="2025-02-13T19:00:34.806039227Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:00:34.840942 containerd[1761]: time="2025-02-13T19:00:34.840894097Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5ecd030f88fe2aae0f9b4d786bfc3f8ddacccff4762fbbaa7c38c489e1205d7\""
Feb 13 19:00:34.841602 containerd[1761]: time="2025-02-13T19:00:34.841563296Z" level=info msg="StartContainer for \"f5ecd030f88fe2aae0f9b4d786bfc3f8ddacccff4762fbbaa7c38c489e1205d7\""
Feb 13 19:00:34.869957 systemd[1]: Started cri-containerd-f5ecd030f88fe2aae0f9b4d786bfc3f8ddacccff4762fbbaa7c38c489e1205d7.scope - libcontainer container f5ecd030f88fe2aae0f9b4d786bfc3f8ddacccff4762fbbaa7c38c489e1205d7.
Feb 13 19:00:34.904068 containerd[1761]: time="2025-02-13T19:00:34.904019757Z" level=info msg="StartContainer for \"f5ecd030f88fe2aae0f9b4d786bfc3f8ddacccff4762fbbaa7c38c489e1205d7\" returns successfully"
Feb 13 19:00:34.909500 systemd[1]: cri-containerd-f5ecd030f88fe2aae0f9b4d786bfc3f8ddacccff4762fbbaa7c38c489e1205d7.scope: Deactivated successfully.
Feb 13 19:00:34.974600 containerd[1761]: time="2025-02-13T19:00:34.974384295Z" level=info msg="shim disconnected" id=f5ecd030f88fe2aae0f9b4d786bfc3f8ddacccff4762fbbaa7c38c489e1205d7 namespace=k8s.io
Feb 13 19:00:34.974600 containerd[1761]: time="2025-02-13T19:00:34.974440535Z" level=warning msg="cleaning up after shim disconnected" id=f5ecd030f88fe2aae0f9b4d786bfc3f8ddacccff4762fbbaa7c38c489e1205d7 namespace=k8s.io
Feb 13 19:00:34.974600 containerd[1761]: time="2025-02-13T19:00:34.974449255Z" level=info msg="cleaning up dead shim" namespace=k8s.io
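Editor's note: the mount-cgroup init container above runs its full lifecycle in well under a second (StartContainer returns, the scope is deactivated, the shim disconnects), and the same pattern repeats below for apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state. A rough sketch, assuming the journal text is piped in on stdin, that pairs those start and shim-exit events per container ID:

import re
import sys

start_re = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" returns successfully')
shim_re = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

started, exited = [], set()
for line in sys.stdin:
    if (m := start_re.search(line)):
        started.append(m.group(1))
    elif (m := shim_re.search(line)):
        exited.add(m.group(1))

for cid in started:
    print(cid[:12], "exited" if cid in exited else "still running")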
Feb 13 19:00:35.003071 containerd[1761]: time="2025-02-13T19:00:35.002953367Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:00:35.055380 containerd[1761]: time="2025-02-13T19:00:35.055152550Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d24ab3d8c3f80b28c79be88415865bf16164756c437ad79587e291b2e30d719d\""
Feb 13 19:00:35.057531 containerd[1761]: time="2025-02-13T19:00:35.057105230Z" level=info msg="StartContainer for \"d24ab3d8c3f80b28c79be88415865bf16164756c437ad79587e291b2e30d719d\""
Feb 13 19:00:35.063888 sshd[5129]: Accepted publickey for core from 10.200.16.10 port 40748 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:35.068475 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:35.079862 systemd-logind[1729]: New session 29 of user core.
Feb 13 19:00:35.085727 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:00:35.090271 systemd[1]: Started cri-containerd-d24ab3d8c3f80b28c79be88415865bf16164756c437ad79587e291b2e30d719d.scope - libcontainer container d24ab3d8c3f80b28c79be88415865bf16164756c437ad79587e291b2e30d719d.
Feb 13 19:00:35.119264 containerd[1761]: time="2025-02-13T19:00:35.118360691Z" level=info msg="StartContainer for \"d24ab3d8c3f80b28c79be88415865bf16164756c437ad79587e291b2e30d719d\" returns successfully"
Feb 13 19:00:35.123764 systemd[1]: cri-containerd-d24ab3d8c3f80b28c79be88415865bf16164756c437ad79587e291b2e30d719d.scope: Deactivated successfully.
Feb 13 19:00:35.157211 containerd[1761]: time="2025-02-13T19:00:35.157143799Z" level=info msg="shim disconnected" id=d24ab3d8c3f80b28c79be88415865bf16164756c437ad79587e291b2e30d719d namespace=k8s.io
Feb 13 19:00:35.157445 containerd[1761]: time="2025-02-13T19:00:35.157429119Z" level=warning msg="cleaning up after shim disconnected" id=d24ab3d8c3f80b28c79be88415865bf16164756c437ad79587e291b2e30d719d namespace=k8s.io
Feb 13 19:00:35.157648 containerd[1761]: time="2025-02-13T19:00:35.157570839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:35.428721 sshd[5254]: Connection closed by 10.200.16.10 port 40748
Feb 13 19:00:35.429328 sshd-session[5129]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:35.433019 systemd[1]: sshd@26-10.200.20.41:22-10.200.16.10:40748.service: Deactivated successfully.
Feb 13 19:00:35.435136 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:00:35.436051 systemd-logind[1729]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:00:35.438988 systemd-logind[1729]: Removed session 29.
Feb 13 19:00:35.519989 systemd[1]: Started sshd@27-10.200.20.41:22-10.200.16.10:40756.service - OpenSSH per-connection server daemon (10.200.16.10:40756).
Feb 13 19:00:35.965994 sshd[5304]: Accepted publickey for core from 10.200.16.10 port 40756 ssh2: RSA SHA256:RSLnucAnFMExQ2Qwu8/R/SCFTxGSX/gWsApH+GB+FY0
Feb 13 19:00:35.967315 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:00:35.972593 systemd-logind[1729]: New session 30 of user core.
Feb 13 19:00:35.978888 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 19:00:36.005808 containerd[1761]: time="2025-02-13T19:00:36.005605697Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:00:36.033502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652852286.mount: Deactivated successfully.
Feb 13 19:00:36.044898 containerd[1761]: time="2025-02-13T19:00:36.044834845Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864\""
Feb 13 19:00:36.046205 containerd[1761]: time="2025-02-13T19:00:36.045432725Z" level=info msg="StartContainer for \"9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864\""
Feb 13 19:00:36.070981 systemd[1]: Started cri-containerd-9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864.scope - libcontainer container 9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864.
Feb 13 19:00:36.099115 systemd[1]: cri-containerd-9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864.scope: Deactivated successfully.
Feb 13 19:00:36.101132 containerd[1761]: time="2025-02-13T19:00:36.101088908Z" level=info msg="StartContainer for \"9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864\" returns successfully"
Feb 13 19:00:36.134337 containerd[1761]: time="2025-02-13T19:00:36.134280978Z" level=info msg="shim disconnected" id=9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864 namespace=k8s.io
Feb 13 19:00:36.134740 containerd[1761]: time="2025-02-13T19:00:36.134606018Z" level=warning msg="cleaning up after shim disconnected" id=9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864 namespace=k8s.io
Feb 13 19:00:36.134740 containerd[1761]: time="2025-02-13T19:00:36.134622938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:36.642405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d8bc1b7012f39518ce02ec3d1715b1e6650810e2cb9410cc28707ea2ff4e864-rootfs.mount: Deactivated successfully.
Feb 13 19:00:36.665049 kubelet[3324]: E0213 19:00:36.664929    3324 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:00:37.011024 containerd[1761]: time="2025-02-13T19:00:37.009965565Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:00:37.042625 containerd[1761]: time="2025-02-13T19:00:37.042528922Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610\""
Feb 13 19:00:37.043078 containerd[1761]: time="2025-02-13T19:00:37.043054762Z" level=info msg="StartContainer for \"af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610\""
Feb 13 19:00:37.068907 systemd[1]: Started cri-containerd-af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610.scope - libcontainer container af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610.
Feb 13 19:00:37.090540 systemd[1]: cri-containerd-af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610.scope: Deactivated successfully.
Feb 13 19:00:37.097025 containerd[1761]: time="2025-02-13T19:00:37.096903317Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod489ee3b4_9ec2_40e1_91a0_8ff67d8614da.slice/cri-containerd-af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610.scope/memory.events\": no such file or directory"
Feb 13 19:00:37.098590 containerd[1761]: time="2025-02-13T19:00:37.098487837Z" level=info msg="StartContainer for \"af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610\" returns successfully"
Feb 13 19:00:37.126011 containerd[1761]: time="2025-02-13T19:00:37.125932755Z" level=info msg="shim disconnected" id=af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610 namespace=k8s.io
Feb 13 19:00:37.126011 containerd[1761]: time="2025-02-13T19:00:37.126006835Z" level=warning msg="cleaning up after shim disconnected" id=af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610 namespace=k8s.io
Feb 13 19:00:37.126291 containerd[1761]: time="2025-02-13T19:00:37.126014675Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:00:37.642457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af87b23f7ca82a716e75ea5d8d00a96dbf15723832dd05bfc546714e323bc610-rootfs.mount: Deactivated successfully.
Feb 13 19:00:38.014030 containerd[1761]: time="2025-02-13T19:00:38.013821879Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:00:38.049906 containerd[1761]: time="2025-02-13T19:00:38.049783275Z" level=info msg="CreateContainer within sandbox \"a6aa48de77a7537f3cd900237dc62c9c77dcb705370a34ae233c58e18fbc4994\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3d71c4fffc3866934b608478f4c20f999b99dbe373b5e505de1bcc27cdc65618\""
Feb 13 19:00:38.051780 containerd[1761]: time="2025-02-13T19:00:38.051742235Z" level=info msg="StartContainer for \"3d71c4fffc3866934b608478f4c20f999b99dbe373b5e505de1bcc27cdc65618\""
Feb 13 19:00:38.080966 systemd[1]: Started cri-containerd-3d71c4fffc3866934b608478f4c20f999b99dbe373b5e505de1bcc27cdc65618.scope - libcontainer container 3d71c4fffc3866934b608478f4c20f999b99dbe373b5e505de1bcc27cdc65618.
Feb 13 19:00:38.110888 containerd[1761]: time="2025-02-13T19:00:38.110830950Z" level=info msg="StartContainer for \"3d71c4fffc3866934b608478f4c20f999b99dbe373b5e505de1bcc27cdc65618\" returns successfully"
Feb 13 19:00:38.568715 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:00:39.039680 kubelet[3324]: I0213 19:00:39.039499    3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l4km8" podStartSLOduration=5.039478711 podStartE2EDuration="5.039478711s" podCreationTimestamp="2025-02-13 19:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:00:39.038217111 +0000 UTC m=+192.582549122" watchObservedRunningTime="2025-02-13 19:00:39.039478711 +0000 UTC m=+192.583810722"
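Editor's note: podStartSLOduration above is simply observedRunningTime minus podCreationTimestamp, both printed in the same line; the zero-valued pull timestamps suggest the image was already present on the node. Reproducing the arithmetic (Python's datetime only carries microseconds, so the nanosecond tail is rounded):

from datetime import datetime, timezone

created = datetime(2025, 2, 13, 19, 0, 34, tzinfo=timezone.utc)
running = datetime(2025, 2, 13, 19, 0, 39, 39479, tzinfo=timezone.utc)  # 19:00:39.039478711 rounded to µs

print((running - created).total_seconds())  # 5.039479 ≈ podStartSLOduration=5.039478711s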
Feb 13 19:00:40.397786 systemd[1]: run-containerd-runc-k8s.io-3d71c4fffc3866934b608478f4c20f999b99dbe373b5e505de1bcc27cdc65618-runc.EVWwQs.mount: Deactivated successfully.
Feb 13 19:00:41.360132 systemd-networkd[1337]: lxc_health: Link UP
Feb 13 19:00:41.379477 systemd-networkd[1337]: lxc_health: Gained carrier
Feb 13 19:00:41.587059 kubelet[3324]: I0213 19:00:41.587007    3324 setters.go:602] "Node became not ready" node="ci-4186.1.1-a-21f48afc48" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:00:41Z","lastTransitionTime":"2025-02-13T19:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
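Editor's note: the "Node became not ready" line embeds the full node condition as JSON after "condition=", so it can be pulled straight out of the journal text; a quick sketch:

import json
import sys

for line in sys.stdin:
    if '"Node became not ready"' in line and "condition=" in line:
        cond = json.loads(line.split("condition=", 1)[1])
        print(cond["reason"], "->", cond["message"])
        # KubeletNotReady -> container runtime network not ready: ...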
Feb 13 19:00:42.952978 systemd-networkd[1337]: lxc_health: Gained IPv6LL
Feb 13 19:00:49.036505 sshd[5306]: Connection closed by 10.200.16.10 port 40756
Feb 13 19:00:49.035644 sshd-session[5304]: pam_unix(sshd:session): session closed for user core
Feb 13 19:00:49.039369 systemd-logind[1729]: Session 30 logged out. Waiting for processes to exit.
Feb 13 19:00:49.040050 systemd[1]: sshd@27-10.200.20.41:22-10.200.16.10:40756.service: Deactivated successfully.
Feb 13 19:00:49.042544 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 19:00:49.043572 systemd-logind[1729]: Removed session 30.