Feb 12 20:30:09.111927 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:30:09.112002 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:30:09.112021 kernel: BIOS-provided physical RAM map:
Feb 12 20:30:09.112046 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 12 20:30:09.112057 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 12 20:30:09.112069 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 12 20:30:09.112087 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 12 20:30:09.112101 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 12 20:30:09.112114 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Feb 12 20:30:09.112127 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 12 20:30:09.112140 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 12 20:30:09.112153 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 12 20:30:09.112166 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 12 20:30:09.112179 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 12 20:30:09.112212 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 12 20:30:09.112227 kernel: NX (Execute Disable) protection: active
Feb 12 20:30:09.112241 kernel: efi: EFI v2.70 by EDK II
Feb 12 20:30:09.112256 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe379198 RNG=0xbfb73018 TPMEventLog=0xbe2bd018 
Feb 12 20:30:09.112270 kernel: random: crng init done
Feb 12 20:30:09.112285 kernel: SMBIOS 2.4 present.
Feb 12 20:30:09.112299 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
Feb 12 20:30:09.112313 kernel: Hypervisor detected: KVM
Feb 12 20:30:09.112332 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:30:09.112346 kernel: kvm-clock: cpu 0, msr 15faa001, primary cpu clock
Feb 12 20:30:09.112360 kernel: kvm-clock: using sched offset of 13189500659 cycles
Feb 12 20:30:09.112375 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:30:09.112390 kernel: tsc: Detected 2299.998 MHz processor
Feb 12 20:30:09.112404 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:30:09.112434 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:30:09.112451 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 12 20:30:09.112475 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 12 20:30:09.112489 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 12 20:30:09.112507 kernel: Using GB pages for direct mapping
Feb 12 20:30:09.112521 kernel: Secure boot disabled
Feb 12 20:30:09.112535 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:30:09.112549 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 12 20:30:09.112563 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001      01000013)
Feb 12 20:30:09.112578 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 12 20:30:09.112592 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 12 20:30:09.112607 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 12 20:30:09.112632 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217)
Feb 12 20:30:09.112648 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE          00000001 GOOG 00000001)
Feb 12 20:30:09.112677 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 12 20:30:09.112694 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 12 20:30:09.112709 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 12 20:30:09.112725 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 12 20:30:09.112744 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 12 20:30:09.112759 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 12 20:30:09.112775 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 12 20:30:09.112819 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 12 20:30:09.112839 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 12 20:30:09.112856 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 12 20:30:09.112873 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 12 20:30:09.112889 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 12 20:30:09.112906 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 12 20:30:09.112926 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 20:30:09.112943 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 20:30:09.112959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 12 20:30:09.113002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 12 20:30:09.113024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 12 20:30:09.113047 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 12 20:30:09.113065 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 12 20:30:09.113082 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Feb 12 20:30:09.113099 kernel: Zone ranges:
Feb 12 20:30:09.113119 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:30:09.113134 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 12 20:30:09.113150 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Feb 12 20:30:09.113166 kernel: Movable zone start for each node
Feb 12 20:30:09.113182 kernel: Early memory node ranges
Feb 12 20:30:09.113198 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 12 20:30:09.113214 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 12 20:30:09.113229 kernel:   node   0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Feb 12 20:30:09.113245 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 12 20:30:09.113266 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 12 20:30:09.113281 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 12 20:30:09.113297 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:30:09.113313 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 12 20:30:09.113328 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 12 20:30:09.113344 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 12 20:30:09.113361 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 12 20:30:09.113376 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 12 20:30:09.113392 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:30:09.113412 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:30:09.113428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:30:09.113444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:30:09.113461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:30:09.113485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:30:09.113501 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:30:09.113517 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 20:30:09.113534 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 12 20:30:09.113549 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:30:09.113569 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:30:09.113585 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 20:30:09.113600 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 20:30:09.113616 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 20:30:09.113632 kernel: pcpu-alloc: [0] 0 1 
Feb 12 20:30:09.113647 kernel: kvm-guest: PV spinlocks enabled
Feb 12 20:30:09.113663 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 20:30:09.113679 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1931256
Feb 12 20:30:09.113695 kernel: Policy zone: Normal
Feb 12 20:30:09.113716 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:30:09.113732 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:30:09.113748 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 12 20:30:09.113764 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:30:09.113781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:30:09.113797 kernel: Memory: 7536508K/7860584K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 323816K reserved, 0K cma-reserved)
Feb 12 20:30:09.113813 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:30:09.113829 kernel: Kernel/User page tables isolation: enabled
Feb 12 20:30:09.113849 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:30:09.113865 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:30:09.113881 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:30:09.113898 kernel: rcu:         RCU event tracing is enabled.
Feb 12 20:30:09.113914 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:30:09.113930 kernel:         Rude variant of Tasks RCU enabled.
Feb 12 20:30:09.113946 kernel:         Tracing variant of Tasks RCU enabled.
Feb 12 20:30:09.113963 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:30:09.113993 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:30:09.114015 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 20:30:09.114044 kernel: Console: colour dummy device 80x25
Feb 12 20:30:09.114060 kernel: printk: console [ttyS0] enabled
Feb 12 20:30:09.114081 kernel: ACPI: Core revision 20210730
Feb 12 20:30:09.114097 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:30:09.114114 kernel: x2apic enabled
Feb 12 20:30:09.114131 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:30:09.114147 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 12 20:30:09.114164 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 12 20:30:09.114182 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 12 20:30:09.114203 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 12 20:30:09.114219 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 12 20:30:09.114236 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:30:09.114253 kernel: Spectre V2 : Mitigation: IBRS
Feb 12 20:30:09.114270 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:30:09.114287 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:30:09.114309 kernel: RETBleed: Mitigation: IBRS
Feb 12 20:30:09.114326 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 20:30:09.114343 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Feb 12 20:30:09.114360 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 20:30:09.114377 kernel: MDS: Mitigation: Clear CPU buffers
Feb 12 20:30:09.114394 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 20:30:09.114411 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 20:30:09.114428 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 20:30:09.114445 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 20:30:09.114465 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 12 20:30:09.114491 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 20:30:09.114508 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:30:09.114524 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:30:09.114541 kernel: LSM: Security Framework initializing
Feb 12 20:30:09.114558 kernel: SELinux:  Initializing.
Feb 12 20:30:09.114574 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 20:30:09.114591 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 20:30:09.114609 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 12 20:30:09.114630 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 12 20:30:09.114646 kernel: signal: max sigframe size: 1776
Feb 12 20:30:09.114663 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:30:09.114679 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 20:30:09.114695 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:30:09.114711 kernel: x86: Booting SMP configuration:
Feb 12 20:30:09.114727 kernel: .... node  #0, CPUs:      #1
Feb 12 20:30:09.114743 kernel: kvm-clock: cpu 1, msr 15faa041, secondary cpu clock
Feb 12 20:30:09.114759 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 12 20:30:09.114780 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 20:30:09.114796 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:30:09.114812 kernel: smpboot: Max logical packages: 1
Feb 12 20:30:09.114828 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 12 20:30:09.114844 kernel: devtmpfs: initialized
Feb 12 20:30:09.114861 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:30:09.114877 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 12 20:30:09.114894 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:30:09.114912 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:30:09.114934 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:30:09.114951 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:30:09.114982 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:30:09.115001 kernel: audit: type=2000 audit(1707769807.655:1): state=initialized audit_enabled=0 res=1
Feb 12 20:30:09.115018 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:30:09.115035 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:30:09.115053 kernel: cpuidle: using governor menu
Feb 12 20:30:09.115070 kernel: ACPI: bus type PCI registered
Feb 12 20:30:09.115088 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:30:09.115111 kernel: dca service started, version 1.12.1
Feb 12 20:30:09.115128 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:30:09.115146 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:30:09.115164 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:30:09.115181 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:30:09.115199 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:30:09.115216 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:30:09.115233 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:30:09.115251 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:30:09.115272 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:30:09.115289 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:30:09.115304 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:30:09.115319 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 12 20:30:09.115334 kernel: ACPI: Interpreter enabled
Feb 12 20:30:09.115350 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:30:09.115365 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:30:09.115381 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:30:09.115397 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 12 20:30:09.115417 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:30:09.115654 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:30:09.115813 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 20:30:09.115833 kernel: PCI host bridge to bus 0000:00
Feb 12 20:30:09.118009 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 12 20:30:09.118214 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 12 20:30:09.118404 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:30:09.118583 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 12 20:30:09.118738 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:30:09.118932 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:30:09.119179 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 12 20:30:09.119357 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:30:09.119535 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 12 20:30:09.119735 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 12 20:30:09.119908 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc07f]
Feb 12 20:30:09.120153 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 12 20:30:09.120332 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:30:09.120512 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc03f]
Feb 12 20:30:09.120675 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 12 20:30:09.120854 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:30:09.121041 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Feb 12 20:30:09.121206 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 12 20:30:09.121229 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:30:09.121248 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:30:09.121266 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:30:09.121283 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:30:09.121301 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:30:09.121324 kernel: iommu: Default domain type: Translated 
Feb 12 20:30:09.121341 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb 12 20:30:09.121358 kernel: vgaarb: loaded
Feb 12 20:30:09.121376 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:30:09.121394 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 20:30:09.121411 kernel: PTP clock support registered
Feb 12 20:30:09.121428 kernel: Registered efivars operations
Feb 12 20:30:09.121445 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:30:09.121462 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:30:09.121493 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 12 20:30:09.121510 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 12 20:30:09.121528 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 12 20:30:09.121555 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 12 20:30:09.121573 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:30:09.121590 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:30:09.121609 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:30:09.121626 kernel: pnp: PnP ACPI init
Feb 12 20:30:09.121644 kernel: pnp: PnP ACPI: found 7 devices
Feb 12 20:30:09.121666 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:30:09.121682 kernel: NET: Registered PF_INET protocol family
Feb 12 20:30:09.121699 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 20:30:09.121717 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 12 20:30:09.121734 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:30:09.121752 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:30:09.121770 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 12 20:30:09.121787 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 12 20:30:09.121805 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 20:30:09.121826 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 20:30:09.121843 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:30:09.121861 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:30:09.122034 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 12 20:30:09.122177 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 12 20:30:09.122313 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:30:09.122447 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 12 20:30:09.122617 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:30:09.122646 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:30:09.122664 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 12 20:30:09.122682 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB)
Feb 12 20:30:09.122700 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 20:30:09.122718 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 12 20:30:09.122736 kernel: clocksource: Switched to clocksource tsc
Feb 12 20:30:09.122754 kernel: Initialise system trusted keyrings
Feb 12 20:30:09.122771 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 12 20:30:09.122792 kernel: Key type asymmetric registered
Feb 12 20:30:09.122809 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:30:09.122827 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:30:09.122845 kernel: io scheduler mq-deadline registered
Feb 12 20:30:09.122862 kernel: io scheduler kyber registered
Feb 12 20:30:09.122880 kernel: io scheduler bfq registered
Feb 12 20:30:09.122897 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:30:09.122916 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:30:09.127846 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 12 20:30:09.127889 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 12 20:30:09.128118 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 12 20:30:09.128145 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:30:09.128325 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 12 20:30:09.128374 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:30:09.128393 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:30:09.128411 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 12 20:30:09.128429 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 12 20:30:09.128447 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 12 20:30:09.128639 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 12 20:30:09.128663 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:30:09.128677 kernel: i8042: Warning: Keylock active
Feb 12 20:30:09.128692 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:30:09.128709 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:30:09.128867 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 12 20:30:09.136423 kernel: rtc_cmos 00:00: registered as rtc0
Feb 12 20:30:09.136620 kernel: rtc_cmos 00:00: setting system clock to 2024-02-12T20:30:08 UTC (1707769808)
Feb 12 20:30:09.136765 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 12 20:30:09.136787 kernel: intel_pstate: CPU model not supported
Feb 12 20:30:09.136804 kernel: pstore: Registered efi as persistent store backend
Feb 12 20:30:09.136821 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:30:09.136837 kernel: Segment Routing with IPv6
Feb 12 20:30:09.136854 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:30:09.136871 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:30:09.136888 kernel: Key type dns_resolver registered
Feb 12 20:30:09.136909 kernel: IPI shorthand broadcast: enabled
Feb 12 20:30:09.136926 kernel: sched_clock: Marking stable (766765834, 188574220)->(1029512441, -74172387)
Feb 12 20:30:09.136943 kernel: registered taskstats version 1
Feb 12 20:30:09.136959 kernel: Loading compiled-in X.509 certificates
Feb 12 20:30:09.137005 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:30:09.137032 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:30:09.137049 kernel: Key type .fscrypt registered
Feb 12 20:30:09.137065 kernel: Key type fscrypt-provisioning registered
Feb 12 20:30:09.137082 kernel: pstore: Using crash dump compression: deflate
Feb 12 20:30:09.137103 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:30:09.137119 kernel: ima: No architecture policies found
Feb 12 20:30:09.137136 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:30:09.137152 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:30:09.137168 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:30:09.137185 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:30:09.137201 kernel: Run /init as init process
Feb 12 20:30:09.137219 kernel:   with arguments:
Feb 12 20:30:09.137238 kernel:     /init
Feb 12 20:30:09.137253 kernel:   with environment:
Feb 12 20:30:09.137269 kernel:     HOME=/
Feb 12 20:30:09.137285 kernel:     TERM=linux
Feb 12 20:30:09.137302 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:30:09.137323 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:30:09.137343 systemd[1]: Detected virtualization kvm.
Feb 12 20:30:09.137360 systemd[1]: Detected architecture x86-64.
Feb 12 20:30:09.137380 systemd[1]: Running in initrd.
Feb 12 20:30:09.137398 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:30:09.137414 systemd[1]: Hostname set to <localhost>.
Feb 12 20:30:09.137432 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:30:09.137450 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:30:09.137467 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:30:09.137484 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:30:09.137501 systemd[1]: Reached target paths.target.
Feb 12 20:30:09.137520 systemd[1]: Reached target slices.target.
Feb 12 20:30:09.137538 systemd[1]: Reached target swap.target.
Feb 12 20:30:09.137555 systemd[1]: Reached target timers.target.
Feb 12 20:30:09.137573 systemd[1]: Listening on iscsid.socket.
Feb 12 20:30:09.137590 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:30:09.137606 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:30:09.137621 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:30:09.137639 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:30:09.137655 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:30:09.137671 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:30:09.137819 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:30:09.137835 systemd[1]: Reached target sockets.target.
Feb 12 20:30:09.137853 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:30:09.137871 systemd[1]: Finished network-cleanup.service.
Feb 12 20:30:09.137889 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:30:09.137907 systemd[1]: Starting systemd-journald.service...
Feb 12 20:30:09.138078 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:30:09.138097 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:30:09.138116 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:30:09.138277 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:30:09.138300 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:30:09.138320 kernel: audit: type=1130 audit(1707769809.121:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.138340 kernel: audit: type=1130 audit(1707769809.130:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.138481 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:30:09.138506 systemd-journald[189]: Journal started
Feb 12 20:30:09.138699 systemd-journald[189]: Runtime Journal (/run/log/journal/b841b5101d2b7d932c3aaa5b6302eae7) is 8.0M, max 148.8M, 140.8M free.
Feb 12 20:30:09.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.144006 systemd[1]: Started systemd-journald.service.
Feb 12 20:30:09.144205 kernel: audit: type=1130 audit(1707769809.137:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.144846 systemd-modules-load[190]: Inserted module 'overlay'
Feb 12 20:30:09.156105 kernel: audit: type=1130 audit(1707769809.147:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.150720 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:30:09.161928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:30:09.188524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:30:09.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.194006 kernel: audit: type=1130 audit(1707769809.186:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.201998 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:30:09.203415 systemd-resolved[191]: Positive Trust Anchors:
Feb 12 20:30:09.203883 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:30:09.204067 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:30:09.208530 systemd-modules-load[190]: Inserted module 'br_netfilter'
Feb 12 20:30:09.215533 kernel: Bridge firewalling registered
Feb 12 20:30:09.211645 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:30:09.243434 kernel: audit: type=1130 audit(1707769809.217:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.243474 kernel: audit: type=1130 audit(1707769809.225:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.243498 kernel: SCSI subsystem initialized
Feb 12 20:30:09.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.212497 systemd-resolved[191]: Defaulting to hostname 'linux'.
Feb 12 20:30:09.219310 systemd[1]: Started systemd-resolved.service.
Feb 12 20:30:09.227243 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:30:09.257341 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:30:09.257387 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:30:09.257411 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:30:09.237442 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:30:09.259599 systemd-modules-load[190]: Inserted module 'dm_multipath'
Feb 12 20:30:09.266132 dracut-cmdline[207]: dracut-dracut-053
Feb 12 20:30:09.266132 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:30:09.260693 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:30:09.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.288765 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:30:09.290176 kernel: audit: type=1130 audit(1707769809.283:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.301919 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:30:09.314125 kernel: audit: type=1130 audit(1707769809.305:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.351993 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 20:30:09.366031 kernel: iscsi: registered transport (tcp)
Feb 12 20:30:09.390456 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:30:09.390543 kernel: QLogic iSCSI HBA Driver
Feb 12 20:30:09.436364 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:30:09.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.438013 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:30:09.496033 kernel: raid6: avx2x4   gen() 18101 MB/s
Feb 12 20:30:09.513019 kernel: raid6: avx2x4   xor()  7560 MB/s
Feb 12 20:30:09.530001 kernel: raid6: avx2x2   gen() 17944 MB/s
Feb 12 20:30:09.547023 kernel: raid6: avx2x2   xor() 18662 MB/s
Feb 12 20:30:09.565017 kernel: raid6: avx2x1   gen() 14135 MB/s
Feb 12 20:30:09.582020 kernel: raid6: avx2x1   xor() 16174 MB/s
Feb 12 20:30:09.600045 kernel: raid6: sse2x4   gen() 11065 MB/s
Feb 12 20:30:09.617052 kernel: raid6: sse2x4   xor()  6621 MB/s
Feb 12 20:30:09.634027 kernel: raid6: sse2x2   gen() 11677 MB/s
Feb 12 20:30:09.651014 kernel: raid6: sse2x2   xor()  7381 MB/s
Feb 12 20:30:09.668015 kernel: raid6: sse2x1   gen() 10456 MB/s
Feb 12 20:30:09.685795 kernel: raid6: sse2x1   xor()  5135 MB/s
Feb 12 20:30:09.685891 kernel: raid6: using algorithm avx2x4 gen() 18101 MB/s
Feb 12 20:30:09.685919 kernel: raid6: .... xor() 7560 MB/s, rmw enabled
Feb 12 20:30:09.686616 kernel: raid6: using avx2x2 recovery algorithm
Feb 12 20:30:09.702008 kernel: xor: automatically using best checksumming function   avx       
Feb 12 20:30:09.811019 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 20:30:09.822401 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 20:30:09.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.825000 audit: BPF prog-id=7 op=LOAD
Feb 12 20:30:09.825000 audit: BPF prog-id=8 op=LOAD
Feb 12 20:30:09.827669 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:30:09.844405 systemd-udevd[389]: Using default interface naming scheme 'v252'.
Feb 12 20:30:09.851520 systemd[1]: Started systemd-udevd.service.
Feb 12 20:30:09.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.854496 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 20:30:09.874922 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation
Feb 12 20:30:09.912582 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 20:30:09.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:09.917359 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:30:09.984703 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:30:09.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:10.072010 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:30:10.104997 kernel: scsi host0: Virtio SCSI HBA
Feb 12 20:30:10.159000 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 20:30:10.159088 kernel: AES CTR mode by8 optimization enabled
Feb 12 20:30:10.181620 kernel: scsi 0:0:1:0: Direct-Access     Google   PersistentDisk   1    PQ: 0 ANSI: 6
Feb 12 20:30:10.256002 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 12 20:30:10.256351 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 12 20:30:10.256555 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 12 20:30:10.270575 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 12 20:30:10.270952 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 12 20:30:10.288815 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 20:30:10.288907 kernel: GPT:17805311 != 25165823
Feb 12 20:30:10.288930 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 20:30:10.294961 kernel: GPT:17805311 != 25165823
Feb 12 20:30:10.298704 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 20:30:10.311170 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 20:30:10.311248 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 12 20:30:10.368004 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (441)
Feb 12 20:30:10.385796 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 20:30:10.400436 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 20:30:10.416126 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 20:30:10.452961 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 20:30:10.458515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:30:10.484298 systemd[1]: Starting disk-uuid.service...
Feb 12 20:30:10.498346 disk-uuid[510]: Primary Header is updated.
Feb 12 20:30:10.498346 disk-uuid[510]: Secondary Entries is updated.
Feb 12 20:30:10.498346 disk-uuid[510]: Secondary Header is updated.
Feb 12 20:30:10.525156 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 20:30:10.544012 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 20:30:10.570000 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 20:30:11.564575 disk-uuid[511]: The operation has completed successfully.
Feb 12 20:30:11.573130 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 20:30:11.631682 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 20:30:11.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:11.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:11.631811 systemd[1]: Finished disk-uuid.service.
Feb 12 20:30:11.643056 systemd[1]: Starting verity-setup.service...
Feb 12 20:30:11.677152 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 12 20:30:11.748576 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 20:30:11.750376 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 20:30:11.763720 systemd[1]: Finished verity-setup.service.
Feb 12 20:30:11.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:11.852705 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:30:11.852607 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 20:30:11.860474 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 20:30:11.907159 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:30:11.907204 kernel: BTRFS info (device sda6): using free space tree
Feb 12 20:30:11.907235 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 20:30:11.861463 systemd[1]: Starting ignition-setup.service...
Feb 12 20:30:11.921145 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 12 20:30:11.876353 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 20:30:11.950428 systemd[1]: Finished ignition-setup.service.
Feb 12 20:30:11.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:11.952138 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 20:30:12.002402 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 20:30:12.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.001000 audit: BPF prog-id=9 op=LOAD
Feb 12 20:30:12.004452 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:30:12.036694 systemd-networkd[685]: lo: Link UP
Feb 12 20:30:12.036706 systemd-networkd[685]: lo: Gained carrier
Feb 12 20:30:12.038514 systemd-networkd[685]: Enumeration completed
Feb 12 20:30:12.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.038965 systemd[1]: Started systemd-networkd.service.
Feb 12 20:30:12.039679 systemd-networkd[685]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:30:12.041785 systemd-networkd[685]: eth0: Link UP
Feb 12 20:30:12.041792 systemd-networkd[685]: eth0: Gained carrier
Feb 12 20:30:12.052455 systemd[1]: Reached target network.target.
Feb 12 20:30:12.053141 systemd-networkd[685]: eth0: DHCPv4 address 10.128.0.56/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 12 20:30:12.075372 systemd[1]: Starting iscsiuio.service...
Feb 12 20:30:12.141246 systemd[1]: Started iscsiuio.service.
Feb 12 20:30:12.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.149518 systemd[1]: Starting iscsid.service...
Feb 12 20:30:12.162134 iscsid[694]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:30:12.162134 iscsid[694]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 12 20:30:12.162134 iscsid[694]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 20:30:12.162134 iscsid[694]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 20:30:12.162134 iscsid[694]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 20:30:12.162134 iscsid[694]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:30:12.162134 iscsid[694]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 20:30:12.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.263361 ignition[635]: Ignition 2.14.0
Feb 12 20:30:12.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.169230 systemd[1]: Started iscsid.service.
Feb 12 20:30:12.263375 ignition[635]: Stage: fetch-offline
Feb 12 20:30:12.177347 systemd[1]: Starting dracut-initqueue.service...
Feb 12 20:30:12.263454 ignition[635]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:30:12.196526 systemd[1]: Finished dracut-initqueue.service.
Feb 12 20:30:12.263495 ignition[635]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Feb 12 20:30:12.223521 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 20:30:12.290004 ignition[635]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 12 20:30:12.266135 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:30:12.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.290361 ignition[635]: parsed url from cmdline: ""
Feb 12 20:30:12.275133 systemd[1]: Reached target remote-fs.target.
Feb 12 20:30:12.290369 ignition[635]: no config URL provided
Feb 12 20:30:12.284529 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 20:30:12.290381 ignition[635]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:30:12.293629 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 20:30:12.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.290396 ignition[635]: no config at "/usr/lib/ignition/user.ign"
Feb 12 20:30:12.318497 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 20:30:12.290409 ignition[635]: failed to fetch config: resource requires networking
Feb 12 20:30:12.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.339508 systemd[1]: Starting ignition-fetch.service...
Feb 12 20:30:12.291105 ignition[635]: Ignition finished successfully
Feb 12 20:30:12.382941 unknown[709]: fetched base config from "system"
Feb 12 20:30:12.350105 ignition[709]: Ignition 2.14.0
Feb 12 20:30:12.382953 unknown[709]: fetched base config from "system"
Feb 12 20:30:12.350114 ignition[709]: Stage: fetch
Feb 12 20:30:12.382963 unknown[709]: fetched user config from "gcp"
Feb 12 20:30:12.350239 ignition[709]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:30:12.399548 systemd[1]: Finished ignition-fetch.service.
Feb 12 20:30:12.350268 ignition[709]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Feb 12 20:30:12.419401 systemd[1]: Starting ignition-kargs.service...
Feb 12 20:30:12.357543 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 12 20:30:12.458647 systemd[1]: Finished ignition-kargs.service.
Feb 12 20:30:12.357724 ignition[709]: parsed url from cmdline: ""
Feb 12 20:30:12.469503 systemd[1]: Starting ignition-disks.service...
Feb 12 20:30:12.357731 ignition[709]: no config URL provided
Feb 12 20:30:12.491565 systemd[1]: Finished ignition-disks.service.
Feb 12 20:30:12.357742 ignition[709]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:30:12.512488 systemd[1]: Reached target initrd-root-device.target.
Feb 12 20:30:12.357756 ignition[709]: no config at "/usr/lib/ignition/user.ign"
Feb 12 20:30:12.530200 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:30:12.357792 ignition[709]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 12 20:30:12.530310 systemd[1]: Reached target local-fs.target.
Feb 12 20:30:12.362685 ignition[709]: GET result: OK
Feb 12 20:30:12.552162 systemd[1]: Reached target sysinit.target.
Feb 12 20:30:12.362741 ignition[709]: parsing config with SHA512: 7a663924e70946a8503e755ab30ea3215e3d10269a02335638ab44b78a71b4a2c6919f6edfa6cbb3c844c4c05190bd8700d97bbb1d78e614b94ede554ed39144
Feb 12 20:30:12.565146 systemd[1]: Reached target basic.target.
Feb 12 20:30:12.384113 ignition[709]: fetch: fetch complete
Feb 12 20:30:12.579312 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 20:30:12.384122 ignition[709]: fetch: fetch passed
Feb 12 20:30:12.384166 ignition[709]: Ignition finished successfully
Feb 12 20:30:12.433403 ignition[715]: Ignition 2.14.0
Feb 12 20:30:12.433413 ignition[715]: Stage: kargs
Feb 12 20:30:12.433545 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:30:12.433575 ignition[715]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Feb 12 20:30:12.441002 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 12 20:30:12.442535 ignition[715]: kargs: kargs passed
Feb 12 20:30:12.442592 ignition[715]: Ignition finished successfully
Feb 12 20:30:12.480699 ignition[721]: Ignition 2.14.0
Feb 12 20:30:12.480708 ignition[721]: Stage: disks
Feb 12 20:30:12.480835 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:30:12.480865 ignition[721]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Feb 12 20:30:12.488451 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 12 20:30:12.489807 ignition[721]: disks: disks passed
Feb 12 20:30:12.489856 ignition[721]: Ignition finished successfully
Feb 12 20:30:12.618965 systemd-fsck[729]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks
Feb 12 20:30:12.813093 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 20:30:12.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:12.823404 systemd[1]: Mounting sysroot.mount...
Feb 12 20:30:12.852180 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:30:12.851300 systemd[1]: Mounted sysroot.mount.
Feb 12 20:30:12.859497 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 20:30:12.878471 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 20:30:12.889703 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 20:30:12.889761 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 20:30:12.889795 systemd[1]: Reached target ignition-diskful.target.
Feb 12 20:30:12.977127 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (735)
Feb 12 20:30:12.977158 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:30:12.977174 kernel: BTRFS info (device sda6): using free space tree
Feb 12 20:30:12.977229 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 20:30:12.906494 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 20:30:12.929257 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:30:12.999168 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 12 20:30:13.007222 systemd[1]: Starting initrd-setup-root.service...
Feb 12 20:30:13.018517 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:30:13.040326 initrd-setup-root[758]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 20:30:13.050150 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory
Feb 12 20:30:13.061116 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 20:30:13.071109 initrd-setup-root[782]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 20:30:13.122223 systemd[1]: Finished initrd-setup-root.service.
Feb 12 20:30:13.163172 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 20:30:13.163274 kernel: audit: type=1130 audit(1707769813.120:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:13.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:13.123899 systemd[1]: Starting ignition-mount.service...
Feb 12 20:30:13.171328 systemd[1]: Starting sysroot-boot.service...
Feb 12 20:30:13.185448 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 20:30:13.185608 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 20:30:13.212180 ignition[802]: INFO     : Ignition 2.14.0
Feb 12 20:30:13.212180 ignition[802]: INFO     : Stage: mount
Feb 12 20:30:13.212180 ignition[802]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:30:13.212180 ignition[802]: DEBUG    : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Feb 12 20:30:13.313173 kernel: audit: type=1130 audit(1707769813.232:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:13.313233 kernel: audit: type=1130 audit(1707769813.271:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:13.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:13.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:13.215577 systemd[1]: Finished sysroot-boot.service.
Feb 12 20:30:13.328152 ignition[802]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 12 20:30:13.328152 ignition[802]: INFO     : mount: mount passed
Feb 12 20:30:13.328152 ignition[802]: INFO     : Ignition finished successfully
Feb 12 20:30:13.391185 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (811)
Feb 12 20:30:13.391290 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:30:13.391384 kernel: BTRFS info (device sda6): using free space tree
Feb 12 20:30:13.391416 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 20:30:13.391432 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 12 20:30:13.234697 systemd[1]: Finished ignition-mount.service.
Feb 12 20:30:13.274595 systemd[1]: Starting ignition-files.service...
Feb 12 20:30:13.325523 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:30:13.422202 ignition[830]: INFO     : Ignition 2.14.0
Feb 12 20:30:13.422202 ignition[830]: INFO     : Stage: files
Feb 12 20:30:13.422202 ignition[830]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:30:13.422202 ignition[830]: DEBUG    : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Feb 12 20:30:13.422202 ignition[830]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 12 20:30:13.489132 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (834)
Feb 12 20:30:13.387683 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:30:13.498171 ignition[830]: DEBUG    : files: compiled without relabeling support, skipping
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 12 20:30:13.498171 ignition[830]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/hosts"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(4): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1898754583"
Feb 12 20:30:13.498171 ignition[830]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem1898754583": device or resource busy
Feb 12 20:30:13.498171 ignition[830]: ERROR    : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1898754583", trying btrfs: device or resource busy
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(5): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1898754583"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1898754583"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(6): [started]  unmounting "/mnt/oem1898754583"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1898754583"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Feb 12 20:30:13.498171 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 20:30:13.440235 unknown[830]: wrote ssh authorized keys file for user: core
Feb 12 20:30:13.760114 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 12 20:30:13.648241 systemd-networkd[685]: eth0: Gained IPv6LL
Feb 12 20:30:13.789101 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 20:30:14.039022 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 12 20:30:14.063122 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 20:30:14.063122 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 20:30:14.063122 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 12 20:30:14.230861 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 20:30:14.342027 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(a): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1209979229"
Feb 12 20:30:14.368127 ignition[830]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem1209979229": device or resource busy
Feb 12 20:30:14.368127 ignition[830]: ERROR    : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1209979229", trying btrfs: device or resource busy
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(b): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1209979229"
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1209979229"
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(c): [started]  unmounting "/mnt/oem1209979229"
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem1209979229"
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:30:14.368127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(d): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 12 20:30:14.588151 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(d): GET result: OK
Feb 12 20:30:14.761335 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(d): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 12 20:30:14.786136 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:30:14.786136 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:30:14.786136 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(e): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 12 20:30:14.786136 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(e): GET result: OK
Feb 12 20:30:15.308803 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(e): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/home/core/install.sh"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(11): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(12): [started]  writing file "/sysroot/etc/systemd/system/oem-gce.service"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(12): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(12): op(13): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem856684514"
Feb 12 20:30:15.333142 ignition[830]: CRITICAL : files: createFilesystemsFiles: createFiles: op(12): op(13): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem856684514": device or resource busy
Feb 12 20:30:15.333142 ignition[830]: ERROR    : files: createFilesystemsFiles: createFiles: op(12): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem856684514", trying btrfs: device or resource busy
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(12): op(14): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem856684514"
Feb 12 20:30:15.333142 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(12): op(14): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem856684514"
Feb 12 20:30:15.689165 kernel: audit: type=1130 audit(1707769815.389:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.689218 kernel: audit: type=1130 audit(1707769815.498:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.689247 kernel: audit: type=1130 audit(1707769815.560:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.689271 kernel: audit: type=1131 audit(1707769815.581:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(12): op(15): [started]  unmounting "/mnt/oem856684514"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(12): op(15): [finished] unmounting "/mnt/oem856684514"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(16): [started]  writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(16): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(16): op(17): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2488236598"
Feb 12 20:30:15.689558 ignition[830]: CRITICAL : files: createFilesystemsFiles: createFiles: op(16): op(17): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem2488236598": device or resource busy
Feb 12 20:30:15.689558 ignition[830]: ERROR    : files: createFilesystemsFiles: createFiles: op(16): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2488236598", trying btrfs: device or resource busy
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(16): op(18): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem2488236598"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(16): op(18): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2488236598"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(16): op(19): [started]  unmounting "/mnt/oem2488236598"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(16): op(19): [finished] unmounting "/mnt/oem2488236598"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: op(1a): [started]  processing unit "oem-gce-enable-oslogin.service"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: op(1a): [finished] processing unit "oem-gce-enable-oslogin.service"
Feb 12 20:30:15.689558 ignition[830]: INFO     : files: op(1b): [started]  processing unit "coreos-metadata-sshkeys@.service"
Feb 12 20:30:16.075269 kernel: audit: type=1130 audit(1707769815.696:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.075316 kernel: audit: type=1131 audit(1707769815.696:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.075332 kernel: audit: type=1130 audit(1707769815.857:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.338701 systemd[1]: mnt-oem856684514.mount: Deactivated successfully.
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1b): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1c): [started]  processing unit "oem-gce.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1c): [finished] processing unit "oem-gce.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1d): [started]  processing unit "prepare-cni-plugins.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1d): op(1e): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1d): op(1e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1d): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1f): [started]  processing unit "prepare-critools.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1f): op(20): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1f): op(20): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(1f): [finished] processing unit "prepare-critools.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(21): [started]  setting preset to enabled for "oem-gce-enable-oslogin.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(21): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(22): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(22): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(23): [started]  setting preset to enabled for "oem-gce.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(23): [finished] setting preset to enabled for "oem-gce.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(24): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 20:30:16.104140 ignition[830]: INFO     : files: op(25): [started]  setting preset to enabled for "prepare-critools.service"
Feb 12 20:30:16.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.363558 systemd[1]: mnt-oem2488236598.mount: Deactivated successfully.
Feb 12 20:30:16.479334 ignition[830]: INFO     : files: op(25): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 20:30:16.479334 ignition[830]: INFO     : files: createResultFile: createFiles: op(26): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 12 20:30:16.479334 ignition[830]: INFO     : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 20:30:16.479334 ignition[830]: INFO     : files: files passed
Feb 12 20:30:16.479334 ignition[830]: INFO     : Ignition finished successfully
Feb 12 20:30:16.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.582560 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 20:30:16.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.382379 systemd[1]: Finished ignition-files.service.
Feb 12 20:30:16.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.402223 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 20:30:16.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.432330 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 20:30:15.433450 systemd[1]: Starting ignition-quench.service...
Feb 12 20:30:16.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.454651 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 20:30:15.499746 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 20:30:15.499901 systemd[1]: Finished ignition-quench.service.
Feb 12 20:30:16.728332 ignition[868]: INFO     : Ignition 2.14.0
Feb 12 20:30:16.728332 ignition[868]: INFO     : Stage: umount
Feb 12 20:30:16.728332 ignition[868]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:30:16.728332 ignition[868]: DEBUG    : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Feb 12 20:30:16.728332 ignition[868]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 12 20:30:16.728332 ignition[868]: INFO     : umount: umount passed
Feb 12 20:30:16.728332 ignition[868]: INFO     : Ignition finished successfully
Feb 12 20:30:16.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.582670 systemd[1]: Reached target ignition-complete.target.
Feb 12 20:30:15.649434 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 20:30:16.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.686837 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 20:30:16.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.686962 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 20:30:16.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.889000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 20:30:15.697452 systemd[1]: Reached target initrd-fs.target.
Feb 12 20:30:15.772335 systemd[1]: Reached target initrd.target.
Feb 12 20:30:15.796318 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 20:30:15.797514 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 20:30:16.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.814559 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 20:30:16.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.860697 systemd[1]: Starting initrd-cleanup.service...
Feb 12 20:30:16.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.915699 systemd[1]: Stopped target nss-lookup.target.
Feb 12 20:30:15.929530 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 20:30:17.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:15.960539 systemd[1]: Stopped target timers.target.
Feb 12 20:30:15.984392 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 20:30:15.984588 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 20:30:17.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.020564 systemd[1]: Stopped target initrd.target.
Feb 12 20:30:17.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.038468 systemd[1]: Stopped target basic.target.
Feb 12 20:30:17.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.057465 systemd[1]: Stopped target ignition-complete.target.
Feb 12 20:30:16.095488 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 20:30:16.124480 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 20:30:17.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.136477 systemd[1]: Stopped target remote-fs.target.
Feb 12 20:30:17.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.155436 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 20:30:17.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.174486 systemd[1]: Stopped target sysinit.target.
Feb 12 20:30:17.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.224484 systemd[1]: Stopped target local-fs.target.
Feb 12 20:30:17.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:17.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:16.237520 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 20:30:16.256441 systemd[1]: Stopped target swap.target.
Feb 12 20:30:16.302382 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 20:30:16.302579 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 20:30:17.262147 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Feb 12 20:30:16.333561 systemd[1]: Stopped target cryptsetup.target.
Feb 12 20:30:17.270188 iscsid[694]: iscsid shutting down.
Feb 12 20:30:16.347431 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 20:30:16.347629 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 20:30:16.366548 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 20:30:16.366851 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 20:30:16.405500 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 20:30:16.405696 systemd[1]: Stopped ignition-files.service.
Feb 12 20:30:16.439682 systemd[1]: Stopping ignition-mount.service...
Feb 12 20:30:16.472467 systemd[1]: Stopping sysroot-boot.service...
Feb 12 20:30:16.487278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 20:30:16.487593 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 20:30:16.510472 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 20:30:16.510673 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 20:30:16.538382 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 20:30:16.539275 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 20:30:16.539382 systemd[1]: Stopped ignition-mount.service.
Feb 12 20:30:16.558850 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 20:30:16.558992 systemd[1]: Stopped sysroot-boot.service.
Feb 12 20:30:16.572751 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 20:30:16.572932 systemd[1]: Stopped ignition-disks.service.
Feb 12 20:30:16.590328 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 20:30:16.590419 systemd[1]: Stopped ignition-kargs.service.
Feb 12 20:30:16.613341 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 20:30:16.613535 systemd[1]: Stopped ignition-fetch.service.
Feb 12 20:30:16.628281 systemd[1]: Stopped target network.target.
Feb 12 20:30:16.646200 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 20:30:16.646309 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 20:30:16.672263 systemd[1]: Stopped target paths.target.
Feb 12 20:30:16.672323 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 20:30:16.677145 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 20:30:16.693183 systemd[1]: Stopped target slices.target.
Feb 12 20:30:16.707165 systemd[1]: Stopped target sockets.target.
Feb 12 20:30:16.721196 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 20:30:16.721252 systemd[1]: Closed iscsid.socket.
Feb 12 20:30:16.735195 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 20:30:16.735257 systemd[1]: Closed iscsiuio.socket.
Feb 12 20:30:16.749171 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 20:30:16.749266 systemd[1]: Stopped ignition-setup.service.
Feb 12 20:30:16.764229 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 20:30:16.764318 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 20:30:16.775596 systemd[1]: Stopping systemd-networkd.service...
Feb 12 20:30:16.779100 systemd-networkd[685]: eth0: DHCPv6 lease lost
Feb 12 20:30:17.277000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 20:30:16.817417 systemd[1]: Stopping systemd-resolved.service...
Feb 12 20:30:16.842245 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 20:30:16.842370 systemd[1]: Stopped systemd-resolved.service.
Feb 12 20:30:16.858041 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 20:30:16.858168 systemd[1]: Stopped systemd-networkd.service.
Feb 12 20:30:16.874695 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 20:30:16.874802 systemd[1]: Finished initrd-cleanup.service.
Feb 12 20:30:16.892354 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 20:30:16.892396 systemd[1]: Closed systemd-networkd.socket.
Feb 12 20:30:16.907201 systemd[1]: Stopping network-cleanup.service...
Feb 12 20:30:16.925088 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 20:30:16.925306 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 20:30:16.944369 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:30:16.944440 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:30:16.961402 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 20:30:16.961470 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 20:30:16.981493 systemd[1]: Stopping systemd-udevd.service...
Feb 12 20:30:16.996674 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 20:30:16.997373 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 20:30:16.997520 systemd[1]: Stopped systemd-udevd.service.
Feb 12 20:30:17.012561 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 20:30:17.012660 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 20:30:17.028291 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 20:30:17.028343 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 20:30:17.043253 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 20:30:17.043327 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 20:30:17.059346 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 20:30:17.059417 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 20:30:17.074343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 20:30:17.074414 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 20:30:17.090286 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 20:30:17.113220 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 12 20:30:17.113331 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 12 20:30:17.129393 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 20:30:17.129459 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 20:30:17.147262 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 20:30:17.147339 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 20:30:17.164495 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 12 20:30:17.165209 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 20:30:17.165325 systemd[1]: Stopped network-cleanup.service.
Feb 12 20:30:17.181511 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 20:30:17.181617 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 20:30:17.197378 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 20:30:17.215238 systemd[1]: Starting initrd-switch-root.service...
Feb 12 20:30:17.229934 systemd[1]: Switching root.
Feb 12 20:30:17.281102 systemd-journald[189]: Journal stopped
Feb 12 20:30:21.999380 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb 12 20:30:21.999489 kernel: SELinux:  Class anon_inode not defined in policy.
Feb 12 20:30:21.999529 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 20:30:21.999563 kernel: SELinux:  policy capability network_peer_controls=1
Feb 12 20:30:21.999586 kernel: SELinux:  policy capability open_perms=1
Feb 12 20:30:21.999620 kernel: SELinux:  policy capability extended_socket_class=1
Feb 12 20:30:21.999650 kernel: SELinux:  policy capability always_check_network=0
Feb 12 20:30:21.999678 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 12 20:30:21.999701 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 12 20:30:21.999724 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 12 20:30:21.999746 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 12 20:30:21.999771 systemd[1]: Successfully loaded SELinux policy in 112.738ms.
Feb 12 20:30:21.999810 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.251ms.
Feb 12 20:30:21.999836 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:30:21.999865 systemd[1]: Detected virtualization kvm.
Feb 12 20:30:21.999888 systemd[1]: Detected architecture x86-64.
Feb 12 20:30:21.999913 systemd[1]: Detected first boot.
Feb 12 20:30:21.999937 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:30:21.999963 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 20:30:22.000002 systemd[1]: Populated /etc with preset unit settings.
Feb 12 20:30:22.000028 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:30:22.000064 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:30:22.000089 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:30:22.000125 kernel: kauditd_printk_skb: 50 callbacks suppressed
Feb 12 20:30:22.000148 kernel: audit: type=1334 audit(1707769821.022:87): prog-id=12 op=LOAD
Feb 12 20:30:22.000169 kernel: audit: type=1334 audit(1707769821.022:88): prog-id=3 op=UNLOAD
Feb 12 20:30:22.000191 kernel: audit: type=1334 audit(1707769821.027:89): prog-id=13 op=LOAD
Feb 12 20:30:22.000212 kernel: audit: type=1334 audit(1707769821.034:90): prog-id=14 op=LOAD
Feb 12 20:30:22.000234 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 20:30:22.000260 kernel: audit: type=1334 audit(1707769821.034:91): prog-id=4 op=UNLOAD
Feb 12 20:30:22.000281 kernel: audit: type=1334 audit(1707769821.034:92): prog-id=5 op=UNLOAD
Feb 12 20:30:22.000304 kernel: audit: type=1131 audit(1707769821.037:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.000327 systemd[1]: Stopped iscsiuio.service.
Feb 12 20:30:22.000350 kernel: audit: type=1334 audit(1707769821.103:94): prog-id=12 op=UNLOAD
Feb 12 20:30:22.000372 kernel: audit: type=1131 audit(1707769821.118:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.000394 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 20:30:22.000418 systemd[1]: Stopped iscsid.service.
Feb 12 20:30:22.000449 kernel: audit: type=1131 audit(1707769821.159:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.000471 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 20:30:22.000495 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 20:30:22.000528 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 20:30:22.000552 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 20:30:22.000576 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 20:30:22.000601 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 20:30:22.000630 systemd[1]: Created slice system-getty.slice.
Feb 12 20:30:22.000653 systemd[1]: Created slice system-modprobe.slice.
Feb 12 20:30:22.000678 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 20:30:22.000707 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 20:30:22.000732 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 20:30:22.000755 systemd[1]: Created slice user.slice.
Feb 12 20:30:22.000779 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:30:22.000802 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 20:30:22.000826 systemd[1]: Set up automount boot.automount.
Feb 12 20:30:22.000853 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 20:30:22.000876 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 20:30:22.000899 systemd[1]: Stopped target initrd-fs.target.
Feb 12 20:30:22.000921 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 20:30:22.000945 systemd[1]: Reached target integritysetup.target.
Feb 12 20:30:22.004050 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:30:22.004109 systemd[1]: Reached target remote-fs.target.
Feb 12 20:30:22.004134 systemd[1]: Reached target slices.target.
Feb 12 20:30:22.004158 systemd[1]: Reached target swap.target.
Feb 12 20:30:22.004181 systemd[1]: Reached target torcx.target.
Feb 12 20:30:22.004211 systemd[1]: Reached target veritysetup.target.
Feb 12 20:30:22.004238 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 20:30:22.004262 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 20:30:22.004285 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:30:22.004309 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:30:22.004332 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:30:22.004375 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 20:30:22.004399 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 20:30:22.004423 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 20:30:22.004452 systemd[1]: Mounting media.mount...
Feb 12 20:30:22.004476 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:30:22.004500 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 20:30:22.004523 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 20:30:22.004546 systemd[1]: Mounting tmp.mount...
Feb 12 20:30:22.004570 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 20:30:22.004593 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 20:30:22.004616 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:30:22.004639 systemd[1]: Starting modprobe@configfs.service...
Feb 12 20:30:22.004666 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 20:30:22.004689 systemd[1]: Starting modprobe@drm.service...
Feb 12 20:30:22.004712 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 20:30:22.004735 systemd[1]: Starting modprobe@fuse.service...
Feb 12 20:30:22.004760 systemd[1]: Starting modprobe@loop.service...
Feb 12 20:30:22.004783 kernel: fuse: init (API version 7.34)
Feb 12 20:30:22.004809 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 20:30:22.004832 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 20:30:22.004855 kernel: loop: module loaded
Feb 12 20:30:22.004881 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 20:30:22.004904 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 20:30:22.004927 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 20:30:22.004950 systemd[1]: Stopped systemd-journald.service.
Feb 12 20:30:22.004990 systemd[1]: Starting systemd-journald.service...
Feb 12 20:30:22.005020 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:30:22.005043 systemd[1]: Starting systemd-network-generator.service...
Feb 12 20:30:22.005073 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 20:30:22.005103 systemd-journald[992]: Journal started
Feb 12 20:30:22.005200 systemd-journald[992]: Runtime Journal (/run/log/journal/b841b5101d2b7d932c3aaa5b6302eae7) is 8.0M, max 148.8M, 140.8M free.
Feb 12 20:30:17.584000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 20:30:17.743000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:30:17.743000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:30:17.743000 audit: BPF prog-id=10 op=LOAD
Feb 12 20:30:17.743000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 20:30:17.743000 audit: BPF prog-id=11 op=LOAD
Feb 12 20:30:17.743000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 20:30:17.901000 audit[901]: AVC avc:  denied  { associate } for  pid=901 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 20:30:17.901000 audit[901]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:30:17.901000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:30:17.911000 audit[901]: AVC avc:  denied  { associate } for  pid=901 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 20:30:17.911000 audit[901]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b5 a2=1ed a3=0 items=2 ppid=884 pid=901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:30:17.911000 audit: CWD cwd="/"
Feb 12 20:30:17.911000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:17.911000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:17.911000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:30:21.022000 audit: BPF prog-id=12 op=LOAD
Feb 12 20:30:21.022000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:30:21.027000 audit: BPF prog-id=13 op=LOAD
Feb 12 20:30:21.034000 audit: BPF prog-id=14 op=LOAD
Feb 12 20:30:21.034000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:30:21.034000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:30:21.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.103000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 20:30:21.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:21.951000 audit: BPF prog-id=15 op=LOAD
Feb 12 20:30:21.951000 audit: BPF prog-id=16 op=LOAD
Feb 12 20:30:21.951000 audit: BPF prog-id=17 op=LOAD
Feb 12 20:30:21.951000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 20:30:21.951000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 20:30:21.995000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 20:30:21.995000 audit[992]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffeca805830 a2=4000 a3=7ffeca8058cc items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:30:21.995000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 20:30:17.897818 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:30:21.021165 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:30:17.899069 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:30:21.039012 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 20:30:17.899105 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:30:17.899160 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 20:30:17.899179 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 20:30:17.899237 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 20:30:17.899260 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 20:30:17.899594 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 20:30:17.899669 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:30:17.899692 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:30:17.901176 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 20:30:17.901246 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 20:30:17.901281 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 20:30:17.901318 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 20:30:17.901352 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 20:30:17.901386 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 20:30:20.413159 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:30:20.413456 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:30:20.413589 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:30:20.414265 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:30:20.414572 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:20Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 20:30:20.414745 /usr/lib/systemd/system-generators/torcx-generator[901]: time="2024-02-12T20:30:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 20:30:22.024011 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:30:22.043515 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 20:30:22.043753 systemd[1]: Stopped verity-setup.service.
Feb 12 20:30:22.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.062990 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:30:22.072008 systemd[1]: Started systemd-journald.service.
Feb 12 20:30:22.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.081556 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 20:30:22.089315 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 20:30:22.096297 systemd[1]: Mounted media.mount.
Feb 12 20:30:22.104277 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 20:30:22.114288 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 20:30:22.123252 systemd[1]: Mounted tmp.mount.
Feb 12 20:30:22.131454 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 20:30:22.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.140522 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:30:22.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.150506 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 20:30:22.150719 systemd[1]: Finished modprobe@configfs.service.
Feb 12 20:30:22.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.159517 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 20:30:22.159753 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 20:30:22.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.169496 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 20:30:22.169721 systemd[1]: Finished modprobe@drm.service.
Feb 12 20:30:22.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.179498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 20:30:22.179728 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 20:30:22.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.188486 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 20:30:22.188714 systemd[1]: Finished modprobe@fuse.service.
Feb 12 20:30:22.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.197475 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 20:30:22.197699 systemd[1]: Finished modprobe@loop.service.
Feb 12 20:30:22.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.206514 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:30:22.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.215482 systemd[1]: Finished systemd-network-generator.service.
Feb 12 20:30:22.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.224523 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 20:30:22.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.234503 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:30:22.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.243846 systemd[1]: Reached target network-pre.target.
Feb 12 20:30:22.254624 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 20:30:22.264485 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 20:30:22.272111 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 20:30:22.274842 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 20:30:22.283831 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 20:30:22.292177 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 20:30:22.293938 systemd[1]: Starting systemd-random-seed.service...
Feb 12 20:30:22.301187 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 20:30:22.303629 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:30:22.307416 systemd-journald[992]: Time spent on flushing to /var/log/journal/b841b5101d2b7d932c3aaa5b6302eae7 is 60.817ms for 1157 entries.
Feb 12 20:30:22.307416 systemd-journald[992]: System Journal (/var/log/journal/b841b5101d2b7d932c3aaa5b6302eae7) is 8.0M, max 584.8M, 576.8M free.
Feb 12 20:30:22.419157 systemd-journald[992]: Received client request to flush runtime journal.
Feb 12 20:30:22.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.321027 systemd[1]: Starting systemd-sysusers.service...
Feb 12 20:30:22.329947 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 20:30:22.340633 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 20:30:22.420389 udevadm[1006]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 20:30:22.351322 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 20:30:22.360518 systemd[1]: Finished systemd-random-seed.service.
Feb 12 20:30:22.369551 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:30:22.382054 systemd[1]: Reached target first-boot-complete.target.
Feb 12 20:30:22.395146 systemd[1]: Finished systemd-sysusers.service.
Feb 12 20:30:22.405027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:30:22.420680 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 20:30:22.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:22.472015 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:30:22.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.005629 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 20:30:23.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.012000 audit: BPF prog-id=18 op=LOAD
Feb 12 20:30:23.013000 audit: BPF prog-id=19 op=LOAD
Feb 12 20:30:23.013000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:30:23.013000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:30:23.016027 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:30:23.039061 systemd-udevd[1011]: Using default interface naming scheme 'v252'.
Feb 12 20:30:23.086001 systemd[1]: Started systemd-udevd.service.
Feb 12 20:30:23.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.094000 audit: BPF prog-id=20 op=LOAD
Feb 12 20:30:23.098002 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:30:23.109000 audit: BPF prog-id=21 op=LOAD
Feb 12 20:30:23.109000 audit: BPF prog-id=22 op=LOAD
Feb 12 20:30:23.109000 audit: BPF prog-id=23 op=LOAD
Feb 12 20:30:23.112783 systemd[1]: Starting systemd-userdbd.service...
Feb 12 20:30:23.166541 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 20:30:23.186631 systemd[1]: Started systemd-userdbd.service.
Feb 12 20:30:23.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.269023 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 12 20:30:23.337000 systemd-networkd[1025]: lo: Link UP
Feb 12 20:30:23.337022 systemd-networkd[1025]: lo: Gained carrier
Feb 12 20:30:23.337740 systemd-networkd[1025]: Enumeration completed
Feb 12 20:30:23.337887 systemd[1]: Started systemd-networkd.service.
Feb 12 20:30:23.338789 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:30:23.341059 systemd-networkd[1025]: eth0: Link UP
Feb 12 20:30:23.341071 systemd-networkd[1025]: eth0: Gained carrier
Feb 12 20:30:23.346040 kernel: ACPI: button: Power Button [PWRF]
Feb 12 20:30:23.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.357172 systemd-networkd[1025]: eth0: DHCPv4 address 10.128.0.56/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 12 20:30:23.334000 audit[1019]: AVC avc:  denied  { confidentiality } for  pid=1019 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 20:30:23.334000 audit[1019]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559aaecc8470 a1=32194 a2=7f5a9ae18bc5 a3=5 items=108 ppid=1011 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:30:23.334000 audit: CWD cwd="/"
Feb 12 20:30:23.334000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=1 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=2 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=3 name=(null) inode=14055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=4 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=5 name=(null) inode=14056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=6 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=7 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=8 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=9 name=(null) inode=14058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=10 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=11 name=(null) inode=14059 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=12 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=13 name=(null) inode=14060 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=14 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=15 name=(null) inode=14061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=16 name=(null) inode=14057 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=17 name=(null) inode=14062 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=18 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=19 name=(null) inode=14063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=20 name=(null) inode=14063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=21 name=(null) inode=14064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=22 name=(null) inode=14063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=23 name=(null) inode=14065 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=24 name=(null) inode=14063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=25 name=(null) inode=14066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=26 name=(null) inode=14063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=27 name=(null) inode=14067 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.390062 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Feb 12 20:30:23.334000 audit: PATH item=28 name=(null) inode=14063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=29 name=(null) inode=14068 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=30 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=31 name=(null) inode=14069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=32 name=(null) inode=14069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=33 name=(null) inode=14070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=34 name=(null) inode=14069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=35 name=(null) inode=14071 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=36 name=(null) inode=14069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=37 name=(null) inode=14072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=38 name=(null) inode=14069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=39 name=(null) inode=14073 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=40 name=(null) inode=14069 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=41 name=(null) inode=14074 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=42 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=43 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=44 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=45 name=(null) inode=14076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=46 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=47 name=(null) inode=14077 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=48 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=49 name=(null) inode=14078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=50 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=51 name=(null) inode=14079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=52 name=(null) inode=14075 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=53 name=(null) inode=14080 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=55 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=56 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=57 name=(null) inode=14082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=58 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=59 name=(null) inode=14083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=60 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=61 name=(null) inode=14084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=62 name=(null) inode=14084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=63 name=(null) inode=14085 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=64 name=(null) inode=14084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=65 name=(null) inode=14086 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=66 name=(null) inode=14084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=67 name=(null) inode=14087 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=68 name=(null) inode=14084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=69 name=(null) inode=14088 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=70 name=(null) inode=14084 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=71 name=(null) inode=14089 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=72 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=73 name=(null) inode=14090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=74 name=(null) inode=14090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=75 name=(null) inode=14091 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=76 name=(null) inode=14090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.395029 kernel: EDAC MC: Ver: 3.0.0
Feb 12 20:30:23.334000 audit: PATH item=77 name=(null) inode=14092 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=78 name=(null) inode=14090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=79 name=(null) inode=14093 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=80 name=(null) inode=14090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=81 name=(null) inode=14094 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=82 name=(null) inode=14090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=83 name=(null) inode=14095 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=84 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=85 name=(null) inode=14096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=86 name=(null) inode=14096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=87 name=(null) inode=14097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=88 name=(null) inode=14096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=89 name=(null) inode=14098 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=90 name=(null) inode=14096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=91 name=(null) inode=14099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=92 name=(null) inode=14096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=93 name=(null) inode=14100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=94 name=(null) inode=14096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=95 name=(null) inode=14101 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=96 name=(null) inode=14081 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=97 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=98 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=99 name=(null) inode=14103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=100 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=101 name=(null) inode=14104 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=102 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=103 name=(null) inode=14105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=104 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=105 name=(null) inode=14106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PATH item=106 name=(null) inode=14102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.407052 kernel: ACPI: button: Sleep Button [SLPF]
Feb 12 20:30:23.334000 audit: PATH item=107 name=(null) inode=14107 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:30:23.334000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 12 20:30:23.460498 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Feb 12 20:30:23.460959 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1033)
Feb 12 20:30:23.486019 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 12 20:30:23.496866 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:30:23.508022 kernel: mousedev: PS/2 mouse device common for all mice
Feb 12 20:30:23.521549 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 20:30:23.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.531965 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 20:30:23.566953 lvm[1048]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 20:30:23.600517 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 20:30:23.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.609371 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:30:23.620720 systemd[1]: Starting lvm2-activation.service...
Feb 12 20:30:23.627082 lvm[1049]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 20:30:23.655945 systemd[1]: Finished lvm2-activation.service.
Feb 12 20:30:23.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.664348 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:30:23.673171 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 20:30:23.673224 systemd[1]: Reached target local-fs.target.
Feb 12 20:30:23.682138 systemd[1]: Reached target machines.target.
Feb 12 20:30:23.692801 systemd[1]: Starting ldconfig.service...
Feb 12 20:30:23.701511 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 20:30:23.701628 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 20:30:23.703412 systemd[1]: Starting systemd-boot-update.service...
Feb 12 20:30:23.711829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 20:30:23.722453 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 20:30:23.732347 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 20:30:23.732440 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 20:30:23.734177 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 20:30:23.739954 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1051 (bootctl)
Feb 12 20:30:23.742776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 20:30:23.754166 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 20:30:23.762073 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 20:30:23.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.764398 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 20:30:23.769458 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 20:30:23.900732 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31)
Feb 12 20:30:23.900732 systemd-fsck[1060]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 12 20:30:23.903699 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 20:30:23.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:23.915243 systemd[1]: Mounting boot.mount...
Feb 12 20:30:23.930615 systemd[1]: Mounted boot.mount.
Feb 12 20:30:23.961230 systemd[1]: Finished systemd-boot-update.service.
Feb 12 20:30:23.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.053579 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 20:30:24.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.064420 systemd[1]: Starting audit-rules.service...
Feb 12 20:30:24.073526 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 20:30:24.084071 systemd[1]: Starting oem-gce-enable-oslogin.service...
Feb 12 20:30:24.095134 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 20:30:24.105881 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:30:24.102000 audit: BPF prog-id=24 op=LOAD
Feb 12 20:30:24.112000 audit: BPF prog-id=25 op=LOAD
Feb 12 20:30:24.115957 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 20:30:24.124997 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 20:30:24.133144 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 20:30:24.134000 audit[1082]: SYSTEM_BOOT pid=1082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.145833 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 20:30:24.164417 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 20:30:24.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.238901 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Feb 12 20:30:24.239201 systemd[1]: Finished oem-gce-enable-oslogin.service.
Feb 12 20:30:24.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.248718 systemd[1]: Started systemd-timesyncd.service.
Feb 12 20:30:24.249163 systemd-timesyncd[1079]: Contacted time server 169.254.169.254:123 (169.254.169.254).
Feb 12 20:30:24.249234 systemd-timesyncd[1079]: Initial clock synchronization to Mon 2024-02-12 20:30:24.129276 UTC.
Feb 12 20:30:24.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.257366 systemd[1]: Reached target time-set.target.
Feb 12 20:30:24.266633 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 20:30:24.275932 systemd-resolved[1075]: Positive Trust Anchors:
Feb 12 20:30:24.275947 systemd-resolved[1075]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:30:24.276014 systemd-resolved[1075]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:30:24.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:30:24.275000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 20:30:24.275000 audit[1098]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc9cfa3890 a2=420 a3=0 items=0 ppid=1063 pid=1098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:30:24.275000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 20:30:24.278573 augenrules[1098]: No rules
Feb 12 20:30:24.279688 systemd[1]: Finished audit-rules.service.
Feb 12 20:30:24.314087 systemd-resolved[1075]: Defaulting to hostname 'linux'.
Feb 12 20:30:24.316568 systemd[1]: Started systemd-resolved.service.
Feb 12 20:30:24.325227 systemd[1]: Reached target network.target.
Feb 12 20:30:24.333117 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:30:24.510304 ldconfig[1050]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 20:30:24.586582 systemd[1]: Finished ldconfig.service.
Feb 12 20:30:24.595884 systemd[1]: Starting systemd-update-done.service...
Feb 12 20:30:24.605255 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 20:30:24.606374 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 20:30:24.615559 systemd[1]: Finished systemd-update-done.service.
Feb 12 20:30:24.624490 systemd[1]: Reached target sysinit.target.
Feb 12 20:30:24.633252 systemd[1]: Started motdgen.path.
Feb 12 20:30:24.640204 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 20:30:24.650352 systemd[1]: Started logrotate.timer.
Feb 12 20:30:24.657310 systemd[1]: Started mdadm.timer.
Feb 12 20:30:24.664128 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 20:30:24.672117 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 20:30:24.672179 systemd[1]: Reached target paths.target.
Feb 12 20:30:24.679133 systemd[1]: Reached target timers.target.
Feb 12 20:30:24.686537 systemd[1]: Listening on dbus.socket.
Feb 12 20:30:24.694534 systemd[1]: Starting docker.socket...
Feb 12 20:30:24.705053 systemd[1]: Listening on sshd.socket.
Feb 12 20:30:24.712251 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 20:30:24.713002 systemd[1]: Listening on docker.socket.
Feb 12 20:30:24.720277 systemd[1]: Reached target sockets.target.
Feb 12 20:30:24.729106 systemd[1]: Reached target basic.target.
Feb 12 20:30:24.736168 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 20:30:24.736215 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 20:30:24.737775 systemd[1]: Starting containerd.service...
Feb 12 20:30:24.746543 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 12 20:30:24.758604 systemd[1]: Starting dbus.service...
Feb 12 20:30:24.765932 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 20:30:24.774874 systemd[1]: Starting extend-filesystems.service...
Feb 12 20:30:24.781650 jq[1110]: false
Feb 12 20:30:24.782149 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 20:30:24.784181 systemd[1]: Starting motdgen.service...
Feb 12 20:30:24.792547 systemd[1]: Starting oem-gce.service...
Feb 12 20:30:24.799463 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 20:30:24.809010 systemd[1]: Starting prepare-critools.service...
Feb 12 20:30:24.820695 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 20:30:24.831116 systemd[1]: Starting sshd-keygen.service...
Feb 12 20:30:24.842342 systemd[1]: Starting systemd-logind.service...
Feb 12 20:30:24.849697 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 20:30:24.849824 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Feb 12 20:30:24.850756 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 12 20:30:24.851467 extend-filesystems[1111]: Found sda
Feb 12 20:30:24.867221 extend-filesystems[1111]: Found sda1
Feb 12 20:30:24.867221 extend-filesystems[1111]: Found sda2
Feb 12 20:30:24.867221 extend-filesystems[1111]: Found sda3
Feb 12 20:30:24.867221 extend-filesystems[1111]: Found usr
Feb 12 20:30:24.867221 extend-filesystems[1111]: Found sda4
Feb 12 20:30:24.867221 extend-filesystems[1111]: Found sda6
Feb 12 20:30:24.867221 extend-filesystems[1111]: Found sda7
Feb 12 20:30:24.867221 extend-filesystems[1111]: Found sda9
Feb 12 20:30:24.867221 extend-filesystems[1111]: Checking size of /dev/sda9
Feb 12 20:30:24.990460 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Feb 12 20:30:24.852039 systemd[1]: Starting update-engine.service...
Feb 12 20:30:24.990805 extend-filesystems[1111]: Resized partition /dev/sda9
Feb 12 20:30:24.862241 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 20:30:25.020611 update_engine[1131]: I0212 20:30:25.009775  1131 main.cc:92] Flatcar Update Engine starting
Feb 12 20:30:25.021077 extend-filesystems[1140]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 20:30:25.070919 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Feb 12 20:30:25.020054 dbus-daemon[1109]: [system] SELinux support is enabled
Feb 12 20:30:24.881496 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 20:30:25.075310 jq[1134]: true
Feb 12 20:30:25.075594 update_engine[1131]: I0212 20:30:25.050374  1131 update_check_scheduler.cc:74] Next update check in 8m55s
Feb 12 20:30:25.044156 dbus-daemon[1109]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1025 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 12 20:30:24.881810 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 20:30:25.064437 dbus-daemon[1109]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 12 20:30:24.882503 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 20:30:25.083629 extend-filesystems[1140]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Feb 12 20:30:25.083629 extend-filesystems[1140]: old_desc_blocks = 1, new_desc_blocks = 2
Feb 12 20:30:25.083629 extend-filesystems[1140]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Feb 12 20:30:24.882746 systemd[1]: Finished motdgen.service.
Feb 12 20:30:25.151941 tar[1142]: ./
Feb 12 20:30:25.151941 tar[1142]: ./loopback
Feb 12 20:30:25.152433 kernel: loop0: detected capacity change from 0 to 2097152
Feb 12 20:30:25.152734 extend-filesystems[1111]: Resized filesystem in /dev/sda9
Feb 12 20:30:24.912785 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 20:30:25.167425 tar[1143]: crictl
Feb 12 20:30:24.915426 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 20:30:25.020305 systemd[1]: Started dbus.service.
Feb 12 20:30:25.168279 mkfs.ext4[1149]: mke2fs 1.46.5 (30-Dec-2021)
Feb 12 20:30:25.168279 mkfs.ext4[1149]: Discarding device blocks: done
Feb 12 20:30:25.168279 mkfs.ext4[1149]: Creating filesystem with 262144 4k blocks and 65536 inodes
Feb 12 20:30:25.168279 mkfs.ext4[1149]: Filesystem UUID: c5965bd4-4435-46c0-889a-48400135ccf9
Feb 12 20:30:25.168279 mkfs.ext4[1149]: Superblock backups stored on blocks:
Feb 12 20:30:25.168279 mkfs.ext4[1149]:         32768, 98304, 163840, 229376
Feb 12 20:30:25.168279 mkfs.ext4[1149]: Allocating group tables: done
Feb 12 20:30:25.168279 mkfs.ext4[1149]: Writing inode tables: done
Feb 12 20:30:25.168279 mkfs.ext4[1149]: Creating journal (8192 blocks): done
Feb 12 20:30:25.168279 mkfs.ext4[1149]: Writing superblocks and filesystem accounting information: done
Feb 12 20:30:25.039644 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 20:30:25.168984 jq[1147]: true
Feb 12 20:30:25.039717 systemd[1]: Reached target system-config.target.
Feb 12 20:30:25.051198 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 20:30:25.169630 bash[1176]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:30:25.051236 systemd[1]: Reached target user-config.target.
Feb 12 20:30:25.063730 systemd[1]: Started update-engine.service.
Feb 12 20:30:25.170125 umount[1166]: umount: /var/lib/flatcar-oem-gce.img: not mounted.
Feb 12 20:30:25.082210 systemd[1]: Started locksmithd.service.
Feb 12 20:30:25.092160 systemd[1]: Starting systemd-hostnamed.service...
Feb 12 20:30:25.099727 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 20:30:25.100008 systemd[1]: Finished extend-filesystems.service.
Feb 12 20:30:25.153456 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 20:30:25.185096 tar[1142]: ./bandwidth
Feb 12 20:30:25.206736 systemd-logind[1129]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 12 20:30:25.206778 systemd-logind[1129]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 12 20:30:25.206808 systemd-logind[1129]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 20:30:25.211658 systemd-logind[1129]: New seat seat0.
Feb 12 20:30:25.218379 systemd[1]: Started systemd-logind.service.
Feb 12 20:30:25.232135 systemd-networkd[1025]: eth0: Gained IPv6LL
Feb 12 20:30:25.246027 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:30:25.247334 env[1148]: time="2024-02-12T20:30:25.247263587Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 20:30:25.413206 coreos-metadata[1108]: Feb 12 20:30:25.413 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Feb 12 20:30:25.427300 coreos-metadata[1108]: Feb 12 20:30:25.427 INFO Fetch failed with 404: resource not found
Feb 12 20:30:25.427473 coreos-metadata[1108]: Feb 12 20:30:25.427 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Feb 12 20:30:25.435000 coreos-metadata[1108]: Feb 12 20:30:25.433 INFO Fetch successful
Feb 12 20:30:25.435000 coreos-metadata[1108]: Feb 12 20:30:25.433 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Feb 12 20:30:25.457619 coreos-metadata[1108]: Feb 12 20:30:25.457 INFO Fetch failed with 404: resource not found
Feb 12 20:30:25.457794 coreos-metadata[1108]: Feb 12 20:30:25.457 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Feb 12 20:30:25.458422 coreos-metadata[1108]: Feb 12 20:30:25.458 INFO Fetch failed with 404: resource not found
Feb 12 20:30:25.458551 coreos-metadata[1108]: Feb 12 20:30:25.458 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Feb 12 20:30:25.475746 coreos-metadata[1108]: Feb 12 20:30:25.468 INFO Fetch successful
Feb 12 20:30:25.472875 systemd[1]: Started systemd-hostnamed.service.
Feb 12 20:30:25.472695 dbus-daemon[1109]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 12 20:30:25.477359 unknown[1108]: wrote ssh authorized keys file for user: core
Feb 12 20:30:25.478910 dbus-daemon[1109]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1168 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 12 20:30:25.488316 systemd[1]: Starting polkit.service...
Feb 12 20:30:25.524021 env[1148]: time="2024-02-12T20:30:25.523904554Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 20:30:25.524369 env[1148]: time="2024-02-12T20:30:25.524338854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:30:25.541085 env[1148]: time="2024-02-12T20:30:25.541010139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:30:25.541236 env[1148]: time="2024-02-12T20:30:25.541082526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:30:25.550260 update-ssh-keys[1185]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:30:25.549595 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.551699976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.551751150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.551778178Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.551796939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.551944551Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.552338752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.552572883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.552600139Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.552685153Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 20:30:25.552898 env[1148]: time="2024-02-12T20:30:25.552706773Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 20:30:25.559868 env[1148]: time="2024-02-12T20:30:25.559803722Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 20:30:25.559868 env[1148]: time="2024-02-12T20:30:25.559851241Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 20:30:25.560084 env[1148]: time="2024-02-12T20:30:25.559874087Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 20:30:25.560084 env[1148]: time="2024-02-12T20:30:25.559937519Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.560084 env[1148]: time="2024-02-12T20:30:25.560047799Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.560084 env[1148]: time="2024-02-12T20:30:25.560077698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.560297 env[1148]: time="2024-02-12T20:30:25.560103251Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.560297 env[1148]: time="2024-02-12T20:30:25.560129041Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.560297 env[1148]: time="2024-02-12T20:30:25.560155356Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.560297 env[1148]: time="2024-02-12T20:30:25.560181002Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.560297 env[1148]: time="2024-02-12T20:30:25.560206549Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.560297 env[1148]: time="2024-02-12T20:30:25.560268434Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 20:30:25.560591 env[1148]: time="2024-02-12T20:30:25.560428536Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 20:30:25.560591 env[1148]: time="2024-02-12T20:30:25.560561897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561044129Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561105593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561132025Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561232495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561261850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561296488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561321226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561344492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561378832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561399755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561422299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561447531Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561643891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561669412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562018 env[1148]: time="2024-02-12T20:30:25.561711544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.562758 env[1148]: time="2024-02-12T20:30:25.561735914Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 20:30:25.562758 env[1148]: time="2024-02-12T20:30:25.561763584Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 20:30:25.562758 env[1148]: time="2024-02-12T20:30:25.561787561Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 20:30:25.562758 env[1148]: time="2024-02-12T20:30:25.561819907Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 20:30:25.562758 env[1148]: time="2024-02-12T20:30:25.561877062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 20:30:25.564162 env[1148]: time="2024-02-12T20:30:25.563354816Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 20:30:25.564162 env[1148]: time="2024-02-12T20:30:25.563457673Z" level=info msg="Connect containerd service"
Feb 12 20:30:25.564162 env[1148]: time="2024-02-12T20:30:25.563517533Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.564580860Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.564926819Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.565031834Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.565341607Z" level=info msg="containerd successfully booted in 0.351910s"
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.566857717Z" level=info msg="Start subscribing containerd event"
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.566943314Z" level=info msg="Start recovering state"
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.567065680Z" level=info msg="Start event monitor"
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.567083689Z" level=info msg="Start snapshots syncer"
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.567102394Z" level=info msg="Start cni network conf syncer for default"
Feb 12 20:30:25.568864 env[1148]: time="2024-02-12T20:30:25.567116594Z" level=info msg="Start streaming server"
Feb 12 20:30:25.565209 systemd[1]: Started containerd.service.
Feb 12 20:30:25.569515 tar[1142]: ./ptp
Feb 12 20:30:25.618504 polkitd[1184]: Started polkitd version 121
Feb 12 20:30:25.643751 polkitd[1184]: Loading rules from directory /etc/polkit-1/rules.d
Feb 12 20:30:25.643847 polkitd[1184]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 12 20:30:25.649706 polkitd[1184]: Finished loading, compiling and executing 2 rules
Feb 12 20:30:25.653673 dbus-daemon[1109]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 12 20:30:25.653909 systemd[1]: Started polkit.service.
Feb 12 20:30:25.654370 polkitd[1184]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 12 20:30:25.711470 systemd-hostnamed[1168]: Hostname set to <ci-3510-3-2-67552dab4a90677cbc7a.c.flatcar-212911.internal> (transient)
Feb 12 20:30:25.714482 systemd-resolved[1075]: System hostname changed to 'ci-3510-3-2-67552dab4a90677cbc7a.c.flatcar-212911.internal'.
Feb 12 20:30:25.782280 tar[1142]: ./vlan
Feb 12 20:30:25.931346 tar[1142]: ./host-device
Feb 12 20:30:26.050322 tar[1142]: ./tuning
Feb 12 20:30:26.152259 tar[1142]: ./vrf
Feb 12 20:30:26.259824 tar[1142]: ./sbr
Feb 12 20:30:26.367012 tar[1142]: ./tap
Feb 12 20:30:26.455537 systemd[1]: Finished prepare-critools.service.
Feb 12 20:30:26.493071 tar[1142]: ./dhcp
Feb 12 20:30:26.756152 tar[1142]: ./static
Feb 12 20:30:26.803250 tar[1142]: ./firewall
Feb 12 20:30:26.867123 tar[1142]: ./macvlan
Feb 12 20:30:26.948502 tar[1142]: ./dummy
Feb 12 20:30:27.063660 tar[1142]: ./bridge
Feb 12 20:30:27.171528 tar[1142]: ./ipvlan
Feb 12 20:30:27.278413 tar[1142]: ./portmap
Feb 12 20:30:27.374093 tar[1142]: ./host-local
Feb 12 20:30:27.487855 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 20:30:28.691953 sshd_keygen[1137]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 20:30:28.741192 systemd[1]: Finished sshd-keygen.service.
Feb 12 20:30:28.742130 locksmithd[1167]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 20:30:28.751892 systemd[1]: Starting issuegen.service...
Feb 12 20:30:28.769102 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 20:30:28.769365 systemd[1]: Finished issuegen.service.
Feb 12 20:30:28.778779 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 20:30:28.793520 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 20:30:28.804912 systemd[1]: Started getty@tty1.service.
Feb 12 20:30:28.814382 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 20:30:28.823513 systemd[1]: Reached target getty.target.
Feb 12 20:30:30.718458 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully.
Feb 12 20:30:32.726015 kernel: loop0: detected capacity change from 0 to 2097152
Feb 12 20:30:32.750992 systemd-nspawn[1217]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img.
Feb 12 20:30:32.750992 systemd-nspawn[1217]: Press ^] three times within 1s to kill container.
Feb 12 20:30:32.771041 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:30:32.852469 systemd[1]: Started oem-gce.service.
Feb 12 20:30:32.860634 systemd[1]: Reached target multi-user.target.
Feb 12 20:30:32.871535 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 20:30:32.884373 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 20:30:32.884638 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 20:30:32.894330 systemd[1]: Startup finished in 1.076s (kernel) + 8.630s (initrd) + 15.437s (userspace) = 25.144s.
Feb 12 20:30:32.949406 systemd-nspawn[1217]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Feb 12 20:30:32.949406 systemd-nspawn[1217]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Feb 12 20:30:32.949673 systemd-nspawn[1217]: + /usr/bin/google_instance_setup
Feb 12 20:30:33.628041 instance-setup[1223]: INFO Running google_set_multiqueue.
Feb 12 20:30:33.648959 instance-setup[1223]: INFO Set channels for eth0 to 2.
Feb 12 20:30:33.652753 instance-setup[1223]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Feb 12 20:30:33.654110 instance-setup[1223]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Feb 12 20:30:33.654564 instance-setup[1223]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Feb 12 20:30:33.656015 instance-setup[1223]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Feb 12 20:30:33.656350 instance-setup[1223]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Feb 12 20:30:33.657683 instance-setup[1223]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Feb 12 20:30:33.658122 instance-setup[1223]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Feb 12 20:30:33.659542 instance-setup[1223]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Feb 12 20:30:33.671521 instance-setup[1223]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Feb 12 20:30:33.671873 instance-setup[1223]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Feb 12 20:30:33.712292 systemd-nspawn[1217]: + /usr/bin/google_metadata_script_runner --script-type startup
Feb 12 20:30:33.857741 systemd[1]: Created slice system-sshd.slice.
Feb 12 20:30:33.859741 systemd[1]: Started sshd@0-10.128.0.56:22-147.75.109.163:39414.service.
Feb 12 20:30:34.056785 startup-script[1254]: INFO Starting startup scripts.
Feb 12 20:30:34.069431 startup-script[1254]: INFO No startup scripts found in metadata.
Feb 12 20:30:34.069588 startup-script[1254]: INFO Finished running startup scripts.
Feb 12 20:30:34.111642 systemd-nspawn[1217]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Feb 12 20:30:34.111642 systemd-nspawn[1217]: + daemon_pids=()
Feb 12 20:30:34.112218 systemd-nspawn[1217]: + for d in accounts clock_skew network
Feb 12 20:30:34.112406 systemd-nspawn[1217]: + daemon_pids+=($!)
Feb 12 20:30:34.112529 systemd-nspawn[1217]: + for d in accounts clock_skew network
Feb 12 20:30:34.112771 systemd-nspawn[1217]: + daemon_pids+=($!)
Feb 12 20:30:34.112872 systemd-nspawn[1217]: + for d in accounts clock_skew network
Feb 12 20:30:34.113246 systemd-nspawn[1217]: + daemon_pids+=($!)
Feb 12 20:30:34.113348 systemd-nspawn[1217]: + NOTIFY_SOCKET=/run/systemd/notify
Feb 12 20:30:34.113348 systemd-nspawn[1217]: + /usr/bin/systemd-notify --ready
Feb 12 20:30:34.113865 systemd-nspawn[1217]: + /usr/bin/google_network_daemon
Feb 12 20:30:34.113948 systemd-nspawn[1217]: + /usr/bin/google_clock_skew_daemon
Feb 12 20:30:34.114500 systemd-nspawn[1217]: + /usr/bin/google_accounts_daemon
Feb 12 20:30:34.184259 systemd-nspawn[1217]: + wait -n 36 37 38
Feb 12 20:30:34.203713 sshd[1256]: Accepted publickey for core from 147.75.109.163 port 39414 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:34.207138 sshd[1256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:34.227642 systemd[1]: Created slice user-500.slice.
Feb 12 20:30:34.230487 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 20:30:34.238025 systemd-logind[1129]: New session 1 of user core.
Feb 12 20:30:34.248183 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 20:30:34.250651 systemd[1]: Starting user@500.service...
Feb 12 20:30:34.295337 (systemd)[1265]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:34.501931 systemd[1265]: Queued start job for default target default.target.
Feb 12 20:30:34.502776 systemd[1265]: Reached target paths.target.
Feb 12 20:30:34.502811 systemd[1265]: Reached target sockets.target.
Feb 12 20:30:34.502835 systemd[1265]: Reached target timers.target.
Feb 12 20:30:34.502858 systemd[1265]: Reached target basic.target.
Feb 12 20:30:34.502932 systemd[1265]: Reached target default.target.
Feb 12 20:30:34.503002 systemd[1265]: Startup finished in 192ms.
Feb 12 20:30:34.503133 systemd[1]: Started user@500.service.
Feb 12 20:30:34.504745 systemd[1]: Started session-1.scope.
Feb 12 20:30:34.732570 systemd[1]: Started sshd@1-10.128.0.56:22-147.75.109.163:42018.service.
Feb 12 20:30:35.042746 google-clock-skew[1261]: INFO Starting Google Clock Skew daemon.
Feb 12 20:30:35.052503 sshd[1274]: Accepted publickey for core from 147.75.109.163 port 42018 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:35.053512 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:35.061663 systemd[1]: Started session-2.scope.
Feb 12 20:30:35.063056 systemd-logind[1129]: New session 2 of user core.
Feb 12 20:30:35.064866 google-networking[1262]: INFO Starting Google Networking daemon.
Feb 12 20:30:35.082521 google-clock-skew[1261]: INFO Clock drift token has changed: 0.
Feb 12 20:30:35.094247 systemd-nspawn[1217]: hwclock: Cannot access the Hardware Clock via any known method.
Feb 12 20:30:35.094247 systemd-nspawn[1217]: hwclock: Use the --verbose option to see the details of our search for an access method.
Feb 12 20:30:35.095138 google-clock-skew[1261]: WARNING Failed to sync system time with hardware clock.
Feb 12 20:30:35.181022 groupadd[1285]: group added to /etc/group: name=google-sudoers, GID=1000
Feb 12 20:30:35.185124 groupadd[1285]: group added to /etc/gshadow: name=google-sudoers
Feb 12 20:30:35.189252 groupadd[1285]: new group: name=google-sudoers, GID=1000
Feb 12 20:30:35.204212 google-accounts[1260]: INFO Starting Google Accounts daemon.
Feb 12 20:30:35.234049 google-accounts[1260]: WARNING OS Login not installed.
Feb 12 20:30:35.235407 google-accounts[1260]: INFO Creating a new user account for 0.
Feb 12 20:30:35.245193 systemd-nspawn[1217]: useradd: invalid user name '0': use --badname to ignore
Feb 12 20:30:35.246010 google-accounts[1260]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Feb 12 20:30:35.272075 sshd[1274]: pam_unix(sshd:session): session closed for user core
Feb 12 20:30:35.276392 systemd[1]: sshd@1-10.128.0.56:22-147.75.109.163:42018.service: Deactivated successfully.
Feb 12 20:30:35.277450 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 20:30:35.278307 systemd-logind[1129]: Session 2 logged out. Waiting for processes to exit.
Feb 12 20:30:35.279527 systemd-logind[1129]: Removed session 2.
Feb 12 20:30:35.320054 systemd[1]: Started sshd@2-10.128.0.56:22-147.75.109.163:42032.service.
Feb 12 20:30:35.610831 sshd[1298]: Accepted publickey for core from 147.75.109.163 port 42032 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:35.612689 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:35.619269 systemd[1]: Started session-3.scope.
Feb 12 20:30:35.619863 systemd-logind[1129]: New session 3 of user core.
Feb 12 20:30:35.822055 sshd[1298]: pam_unix(sshd:session): session closed for user core
Feb 12 20:30:35.826103 systemd[1]: sshd@2-10.128.0.56:22-147.75.109.163:42032.service: Deactivated successfully.
Feb 12 20:30:35.827183 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 20:30:35.828019 systemd-logind[1129]: Session 3 logged out. Waiting for processes to exit.
Feb 12 20:30:35.829318 systemd-logind[1129]: Removed session 3.
Feb 12 20:30:35.865901 systemd[1]: Started sshd@3-10.128.0.56:22-147.75.109.163:42044.service.
Feb 12 20:30:36.147226 sshd[1304]: Accepted publickey for core from 147.75.109.163 port 42044 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:36.148935 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:36.154823 systemd-logind[1129]: New session 4 of user core.
Feb 12 20:30:36.155570 systemd[1]: Started session-4.scope.
Feb 12 20:30:36.359949 sshd[1304]: pam_unix(sshd:session): session closed for user core
Feb 12 20:30:36.364113 systemd-logind[1129]: Session 4 logged out. Waiting for processes to exit.
Feb 12 20:30:36.364544 systemd[1]: sshd@3-10.128.0.56:22-147.75.109.163:42044.service: Deactivated successfully.
Feb 12 20:30:36.365646 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 20:30:36.366904 systemd-logind[1129]: Removed session 4.
Feb 12 20:30:36.405881 systemd[1]: Started sshd@4-10.128.0.56:22-147.75.109.163:42046.service.
Feb 12 20:30:36.675886 systemd[1]: Started sshd@5-10.128.0.56:22-178.128.91.222:42622.service.
Feb 12 20:30:36.691072 sshd[1310]: Accepted publickey for core from 147.75.109.163 port 42046 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:36.692876 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:36.699064 systemd-logind[1129]: New session 5 of user core.
Feb 12 20:30:36.699382 systemd[1]: Started session-5.scope.
Feb 12 20:30:36.889912 sudo[1316]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 20:30:36.890324 sudo[1316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 20:30:37.501210 systemd[1]: Reloading.
Feb 12 20:30:37.614930 /usr/lib/systemd/system-generators/torcx-generator[1346]: time="2024-02-12T20:30:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:30:37.614992 /usr/lib/systemd/system-generators/torcx-generator[1346]: time="2024-02-12T20:30:37Z" level=info msg="torcx already run"
Feb 12 20:30:37.716010 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:30:37.716257 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:30:37.742636 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:30:37.821258 sshd[1313]: Failed password for root from 178.128.91.222 port 42622 ssh2
Feb 12 20:30:37.879177 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 20:30:37.888537 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 20:30:37.889379 systemd[1]: Reached target network-online.target.
Feb 12 20:30:37.891635 systemd[1]: Started kubelet.service.
Feb 12 20:30:37.912148 systemd[1]: Starting coreos-metadata.service...
Feb 12 20:30:37.992579 kubelet[1391]: E0212 20:30:37.992507    1391 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 20:30:37.994321 coreos-metadata[1399]: Feb 12 20:30:37.994 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Feb 12 20:30:37.995408 coreos-metadata[1399]: Feb 12 20:30:37.995 INFO Fetch successful
Feb 12 20:30:37.995408 coreos-metadata[1399]: Feb 12 20:30:37.995 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Feb 12 20:30:37.995993 coreos-metadata[1399]: Feb 12 20:30:37.995 INFO Fetch successful
Feb 12 20:30:37.995993 coreos-metadata[1399]: Feb 12 20:30:37.995 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Feb 12 20:30:37.996000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 20:30:37.996261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 20:30:37.996652 coreos-metadata[1399]: Feb 12 20:30:37.996 INFO Fetch successful
Feb 12 20:30:37.996652 coreos-metadata[1399]: Feb 12 20:30:37.996 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Feb 12 20:30:37.997054 coreos-metadata[1399]: Feb 12 20:30:37.997 INFO Fetch successful
Feb 12 20:30:38.007525 systemd[1]: Finished coreos-metadata.service.
Feb 12 20:30:38.042369 sshd[1313]: Received disconnect from 178.128.91.222 port 42622:11: Bye Bye [preauth]
Feb 12 20:30:38.042369 sshd[1313]: Disconnected from authenticating user root 178.128.91.222 port 42622 [preauth]
Feb 12 20:30:38.044048 systemd[1]: sshd@5-10.128.0.56:22-178.128.91.222:42622.service: Deactivated successfully.
Feb 12 20:30:38.439556 systemd[1]: Stopped kubelet.service.
Feb 12 20:30:38.463240 systemd[1]: Reloading.
Feb 12 20:30:38.565753 /usr/lib/systemd/system-generators/torcx-generator[1455]: time="2024-02-12T20:30:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:30:38.565805 /usr/lib/systemd/system-generators/torcx-generator[1455]: time="2024-02-12T20:30:38Z" level=info msg="torcx already run"
Feb 12 20:30:38.667822 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:30:38.667851 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:30:38.694360 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:30:38.813394 systemd[1]: Started kubelet.service.
Feb 12 20:30:38.876490 kubelet[1499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:30:38.876490 kubelet[1499]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:30:38.876490 kubelet[1499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:30:38.877106 kubelet[1499]: I0212 20:30:38.876581    1499 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 20:30:39.415354 kubelet[1499]: I0212 20:30:39.415297    1499 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 20:30:39.415354 kubelet[1499]: I0212 20:30:39.415339    1499 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 20:30:39.415692 kubelet[1499]: I0212 20:30:39.415652    1499 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 20:30:39.418444 kubelet[1499]: I0212 20:30:39.418412    1499 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 20:30:39.423225 kubelet[1499]: I0212 20:30:39.423197    1499 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 12 20:30:39.423616 kubelet[1499]: I0212 20:30:39.423579    1499 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 20:30:39.423728 kubelet[1499]: I0212 20:30:39.423703    1499 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 20:30:39.423728 kubelet[1499]: I0212 20:30:39.423727    1499 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 20:30:39.423956 kubelet[1499]: I0212 20:30:39.423747    1499 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 20:30:39.423956 kubelet[1499]: I0212 20:30:39.423921    1499 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:30:39.432909 kubelet[1499]: I0212 20:30:39.432875    1499 kubelet.go:405] "Attempting to sync node with API server"
Feb 12 20:30:39.432909 kubelet[1499]: I0212 20:30:39.432909    1499 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 20:30:39.433171 kubelet[1499]: I0212 20:30:39.432937    1499 kubelet.go:309] "Adding apiserver pod source"
Feb 12 20:30:39.433171 kubelet[1499]: I0212 20:30:39.432958    1499 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 20:30:39.433362 kubelet[1499]: E0212 20:30:39.433342    1499 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:39.433504 kubelet[1499]: E0212 20:30:39.433486    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:39.434288 kubelet[1499]: I0212 20:30:39.434214    1499 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 20:30:39.437755 kubelet[1499]: W0212 20:30:39.437719    1499 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 20:30:39.438428 kubelet[1499]: I0212 20:30:39.438395    1499 server.go:1168] "Started kubelet"
Feb 12 20:30:39.438645 kubelet[1499]: I0212 20:30:39.438625    1499 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 20:30:39.440515 kubelet[1499]: I0212 20:30:39.440493    1499 server.go:461] "Adding debug handlers to kubelet server"
Feb 12 20:30:39.450255 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 20:30:39.450464 kubelet[1499]: I0212 20:30:39.438676    1499 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 20:30:39.450868 kubelet[1499]: I0212 20:30:39.450524    1499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 20:30:39.454032 kubelet[1499]: E0212 20:30:39.453988    1499 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 20:30:39.454578 kubelet[1499]: E0212 20:30:39.454031    1499 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 20:30:39.460777 kubelet[1499]: E0212 20:30:39.460643    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379eedd9853c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 438366012, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 438366012, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.461038 kubelet[1499]: W0212 20:30:39.460945    1499 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.128.0.56" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:30:39.461038 kubelet[1499]: E0212 20:30:39.461015    1499 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.56" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 20:30:39.461181 kubelet[1499]: W0212 20:30:39.461082    1499 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:30:39.461181 kubelet[1499]: E0212 20:30:39.461098    1499 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 20:30:39.462179 kubelet[1499]: E0212 20:30:39.461828    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379eeec851ec", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 454015980, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 454015980, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.464247 kubelet[1499]: I0212 20:30:39.464054    1499 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 12 20:30:39.464247 kubelet[1499]: I0212 20:30:39.464171    1499 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 12 20:30:39.476007 kubelet[1499]: W0212 20:30:39.475247    1499 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:30:39.476007 kubelet[1499]: E0212 20:30:39.475282    1499 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 20:30:39.477104 kubelet[1499]: E0212 20:30:39.476543    1499 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.56\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 12 20:30:39.507687 kubelet[1499]: I0212 20:30:39.507637    1499 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 20:30:39.507687 kubelet[1499]: I0212 20:30:39.507667    1499 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 20:30:39.507928 kubelet[1499]: I0212 20:30:39.507705    1499 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:30:39.508739 kubelet[1499]: E0212 20:30:39.508547    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1ea8a65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.56 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506590309, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506590309, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.510671 kubelet[1499]: E0212 20:30:39.509955    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1eaa587", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.56 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506597255, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506597255, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.512019 kubelet[1499]: E0212 20:30:39.511089    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1eab5a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.56 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506601384, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506601384, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.512019 kubelet[1499]: I0212 20:30:39.511217    1499 policy_none.go:49] "None policy: Start"
Feb 12 20:30:39.512710 kubelet[1499]: I0212 20:30:39.512377    1499 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 20:30:39.512710 kubelet[1499]: I0212 20:30:39.512410    1499 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 20:30:39.520843 systemd[1]: Created slice kubepods.slice.
Feb 12 20:30:39.527336 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 20:30:39.531824 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 20:30:39.537157 kubelet[1499]: I0212 20:30:39.537123    1499 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 20:30:39.537484 kubelet[1499]: I0212 20:30:39.537457    1499 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 20:30:39.542400 kubelet[1499]: E0212 20:30:39.541419    1499 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.56\" not found"
Feb 12 20:30:39.544282 kubelet[1499]: E0212 20:30:39.544193    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef402cac9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 541734089, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 541734089, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.567460 kubelet[1499]: I0212 20:30:39.567424    1499 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.56"
Feb 12 20:30:39.568840 kubelet[1499]: E0212 20:30:39.568785    1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.56"
Feb 12 20:30:39.569476 kubelet[1499]: E0212 20:30:39.569382    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1ea8a65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.56 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506590309, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 567354393, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1ea8a65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.570558 kubelet[1499]: E0212 20:30:39.570464    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1eaa587", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.56 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506597255, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 567369164, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1eaa587" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.574021 kubelet[1499]: E0212 20:30:39.571456    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1eab5a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.56 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506601384, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 567373587, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1eab5a8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.604869 kubelet[1499]: I0212 20:30:39.604830    1499 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 20:30:39.607257 kubelet[1499]: I0212 20:30:39.607219    1499 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 20:30:39.607257 kubelet[1499]: I0212 20:30:39.607250    1499 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 12 20:30:39.607472 kubelet[1499]: I0212 20:30:39.607277    1499 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 12 20:30:39.607472 kubelet[1499]: E0212 20:30:39.607338    1499 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 20:30:39.610497 kubelet[1499]: W0212 20:30:39.610464    1499 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:30:39.610663 kubelet[1499]: E0212 20:30:39.610508    1499 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 20:30:39.678859 kubelet[1499]: E0212 20:30:39.678735    1499 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.56\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 12 20:30:39.770206 kubelet[1499]: I0212 20:30:39.770171    1499 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.56"
Feb 12 20:30:39.771872 kubelet[1499]: E0212 20:30:39.771806    1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.56"
Feb 12 20:30:39.772164 kubelet[1499]: E0212 20:30:39.772069    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1ea8a65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.56 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506590309, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 770089728, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1ea8a65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.773378 kubelet[1499]: E0212 20:30:39.773282    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1eaa587", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.56 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506597255, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 770117197, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1eaa587" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:39.774630 kubelet[1499]: E0212 20:30:39.774536    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1eab5a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.56 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506601384, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 770122999, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1eab5a8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:40.081344 kubelet[1499]: E0212 20:30:40.081298    1499 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.56\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Feb 12 20:30:40.172748 kubelet[1499]: I0212 20:30:40.172714    1499 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.56"
Feb 12 20:30:40.174318 kubelet[1499]: E0212 20:30:40.174266    1499 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.56"
Feb 12 20:30:40.174494 kubelet[1499]: E0212 20:30:40.174259    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1ea8a65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.56 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506590309, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 40, 172662281, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1ea8a65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:40.175678 kubelet[1499]: E0212 20:30:40.175589    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1eaa587", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.56 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506597255, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 40, 172670595, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1eaa587" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:40.176786 kubelet[1499]: E0212 20:30:40.176709    1499 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.56.17b3379ef1eab5a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.56", UID:"10.128.0.56", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.56 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.56"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 30, 39, 506601384, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 30, 40, 172674887, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.56.17b3379ef1eab5a8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 20:30:40.418180 kubelet[1499]: I0212 20:30:40.417839    1499 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 12 20:30:40.434301 kubelet[1499]: E0212 20:30:40.434255    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:40.819769 kubelet[1499]: E0212 20:30:40.819648    1499 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.56" not found
Feb 12 20:30:40.887547 kubelet[1499]: E0212 20:30:40.887505    1499 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.56\" not found" node="10.128.0.56"
Feb 12 20:30:40.976052 kubelet[1499]: I0212 20:30:40.976022    1499 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.56"
Feb 12 20:30:40.980906 kubelet[1499]: I0212 20:30:40.980869    1499 kubelet_node_status.go:73] "Successfully registered node" node="10.128.0.56"
Feb 12 20:30:41.001192 kubelet[1499]: I0212 20:30:41.001155    1499 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 12 20:30:41.001735 env[1148]: time="2024-02-12T20:30:41.001684767Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 20:30:41.002317 kubelet[1499]: I0212 20:30:41.001963    1499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 12 20:30:41.333197 sudo[1316]: pam_unix(sudo:session): session closed for user root
Feb 12 20:30:41.377556 sshd[1310]: pam_unix(sshd:session): session closed for user core
Feb 12 20:30:41.381951 systemd[1]: sshd@4-10.128.0.56:22-147.75.109.163:42046.service: Deactivated successfully.
Feb 12 20:30:41.383176 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 20:30:41.384305 systemd-logind[1129]: Session 5 logged out. Waiting for processes to exit.
Feb 12 20:30:41.385565 systemd-logind[1129]: Removed session 5.
Feb 12 20:30:41.434801 kubelet[1499]: E0212 20:30:41.434750    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:41.434801 kubelet[1499]: I0212 20:30:41.434757    1499 apiserver.go:52] "Watching apiserver"
Feb 12 20:30:41.438700 kubelet[1499]: I0212 20:30:41.438647    1499 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:30:41.438894 kubelet[1499]: I0212 20:30:41.438807    1499 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:30:41.446106 systemd[1]: Created slice kubepods-besteffort-pod3619f15a_b873_4a52_9a80_c7d0f3e7b098.slice.
Feb 12 20:30:41.458122 systemd[1]: Created slice kubepods-burstable-pod5666768d_2f08_4c77_9a26_ddefddcba6f8.slice.
Feb 12 20:30:41.465527 kubelet[1499]: I0212 20:30:41.465487    1499 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 20:30:41.476194 kubelet[1499]: I0212 20:30:41.476141    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-hostproc\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.476644 kubelet[1499]: I0212 20:30:41.476600    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5666768d-2f08-4c77-9a26-ddefddcba6f8-hubble-tls\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.476884 kubelet[1499]: I0212 20:30:41.476866    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-run\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.477125 kubelet[1499]: I0212 20:30:41.477094    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-bpf-maps\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.477271 kubelet[1499]: I0212 20:30:41.477251    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5666768d-2f08-4c77-9a26-ddefddcba6f8-clustermesh-secrets\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.477360 kubelet[1499]: I0212 20:30:41.477341    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-config-path\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.477485 kubelet[1499]: I0212 20:30:41.477450    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-cgroup\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.477593 kubelet[1499]: I0212 20:30:41.477544    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-lib-modules\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.477674 kubelet[1499]: I0212 20:30:41.477622    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-host-proc-sys-net\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.477764 kubelet[1499]: I0212 20:30:41.477705    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-host-proc-sys-kernel\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.477841 kubelet[1499]: I0212 20:30:41.477790    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3619f15a-b873-4a52-9a80-c7d0f3e7b098-kube-proxy\") pod \"kube-proxy-lfz8r\" (UID: \"3619f15a-b873-4a52-9a80-c7d0f3e7b098\") " pod="kube-system/kube-proxy-lfz8r"
Feb 12 20:30:41.477905 kubelet[1499]: I0212 20:30:41.477867    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3619f15a-b873-4a52-9a80-c7d0f3e7b098-xtables-lock\") pod \"kube-proxy-lfz8r\" (UID: \"3619f15a-b873-4a52-9a80-c7d0f3e7b098\") " pod="kube-system/kube-proxy-lfz8r"
Feb 12 20:30:41.477960 kubelet[1499]: I0212 20:30:41.477942    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrvx\" (UniqueName: \"kubernetes.io/projected/3619f15a-b873-4a52-9a80-c7d0f3e7b098-kube-api-access-5xrvx\") pod \"kube-proxy-lfz8r\" (UID: \"3619f15a-b873-4a52-9a80-c7d0f3e7b098\") " pod="kube-system/kube-proxy-lfz8r"
Feb 12 20:30:41.478048 kubelet[1499]: I0212 20:30:41.478027    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cni-path\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.478119 kubelet[1499]: I0212 20:30:41.478100    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-etc-cni-netd\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.478217 kubelet[1499]: I0212 20:30:41.478200    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3619f15a-b873-4a52-9a80-c7d0f3e7b098-lib-modules\") pod \"kube-proxy-lfz8r\" (UID: \"3619f15a-b873-4a52-9a80-c7d0f3e7b098\") " pod="kube-system/kube-proxy-lfz8r"
Feb 12 20:30:41.478285 kubelet[1499]: I0212 20:30:41.478277    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-xtables-lock\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.478377 kubelet[1499]: I0212 20:30:41.478361    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9d6\" (UniqueName: \"kubernetes.io/projected/5666768d-2f08-4c77-9a26-ddefddcba6f8-kube-api-access-6x9d6\") pod \"cilium-42gb8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") " pod="kube-system/cilium-42gb8"
Feb 12 20:30:41.478436 kubelet[1499]: I0212 20:30:41.478394    1499 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 20:30:41.756487 env[1148]: time="2024-02-12T20:30:41.756335552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfz8r,Uid:3619f15a-b873-4a52-9a80-c7d0f3e7b098,Namespace:kube-system,Attempt:0,}"
Feb 12 20:30:41.767175 env[1148]: time="2024-02-12T20:30:41.767119395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-42gb8,Uid:5666768d-2f08-4c77-9a26-ddefddcba6f8,Namespace:kube-system,Attempt:0,}"
Feb 12 20:30:42.270727 env[1148]: time="2024-02-12T20:30:42.270653655Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:42.272149 env[1148]: time="2024-02-12T20:30:42.272099391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:42.277022 env[1148]: time="2024-02-12T20:30:42.276939331Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:42.278238 env[1148]: time="2024-02-12T20:30:42.278183971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:42.279394 env[1148]: time="2024-02-12T20:30:42.279358781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:42.281486 env[1148]: time="2024-02-12T20:30:42.281436194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:42.282861 env[1148]: time="2024-02-12T20:30:42.282824685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:42.288154 env[1148]: time="2024-02-12T20:30:42.288102564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:42.318354 env[1148]: time="2024-02-12T20:30:42.318228394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:30:42.318354 env[1148]: time="2024-02-12T20:30:42.318288951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:30:42.318751 env[1148]: time="2024-02-12T20:30:42.318315282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:30:42.319111 env[1148]: time="2024-02-12T20:30:42.319037803Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a93d453cc01f406827d5468d2db9a58e0d58b9a5c1a93f4385865b1e88b9460 pid=1549 runtime=io.containerd.runc.v2
Feb 12 20:30:42.327940 env[1148]: time="2024-02-12T20:30:42.327819503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:30:42.328207 env[1148]: time="2024-02-12T20:30:42.328146205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:30:42.328429 env[1148]: time="2024-02-12T20:30:42.328376006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:30:42.328885 env[1148]: time="2024-02-12T20:30:42.328813959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb pid=1562 runtime=io.containerd.runc.v2
Feb 12 20:30:42.340222 systemd[1]: Started cri-containerd-6a93d453cc01f406827d5468d2db9a58e0d58b9a5c1a93f4385865b1e88b9460.scope.
Feb 12 20:30:42.374172 systemd[1]: Started cri-containerd-682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb.scope.
Feb 12 20:30:42.414563 env[1148]: time="2024-02-12T20:30:42.414505296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfz8r,Uid:3619f15a-b873-4a52-9a80-c7d0f3e7b098,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a93d453cc01f406827d5468d2db9a58e0d58b9a5c1a93f4385865b1e88b9460\""
Feb 12 20:30:42.420301 kubelet[1499]: E0212 20:30:42.420210    1499 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
Feb 12 20:30:42.420965 env[1148]: time="2024-02-12T20:30:42.420916110Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 12 20:30:42.430760 env[1148]: time="2024-02-12T20:30:42.430683477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-42gb8,Uid:5666768d-2f08-4c77-9a26-ddefddcba6f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\""
Feb 12 20:30:42.435708 kubelet[1499]: E0212 20:30:42.435650    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:42.595041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042741269.mount: Deactivated successfully.
Feb 12 20:30:43.436686 kubelet[1499]: E0212 20:30:43.436632    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:43.443300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601894122.mount: Deactivated successfully.
Feb 12 20:30:44.106226 env[1148]: time="2024-02-12T20:30:44.106156540Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:44.110111 env[1148]: time="2024-02-12T20:30:44.110049381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:44.112575 env[1148]: time="2024-02-12T20:30:44.112524108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:44.114875 env[1148]: time="2024-02-12T20:30:44.114827690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:44.115595 env[1148]: time="2024-02-12T20:30:44.115550921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\""
Feb 12 20:30:44.117699 env[1148]: time="2024-02-12T20:30:44.117665891Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 20:30:44.118901 env[1148]: time="2024-02-12T20:30:44.118850047Z" level=info msg="CreateContainer within sandbox \"6a93d453cc01f406827d5468d2db9a58e0d58b9a5c1a93f4385865b1e88b9460\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 20:30:44.142385 env[1148]: time="2024-02-12T20:30:44.142316502Z" level=info msg="CreateContainer within sandbox \"6a93d453cc01f406827d5468d2db9a58e0d58b9a5c1a93f4385865b1e88b9460\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6bc9ed481332ec0ee84ad3a5de3c1c2ec9c8627a67e27929f1186c9d8e8aa94e\""
Feb 12 20:30:44.143437 env[1148]: time="2024-02-12T20:30:44.143390040Z" level=info msg="StartContainer for \"6bc9ed481332ec0ee84ad3a5de3c1c2ec9c8627a67e27929f1186c9d8e8aa94e\""
Feb 12 20:30:44.180151 systemd[1]: run-containerd-runc-k8s.io-6bc9ed481332ec0ee84ad3a5de3c1c2ec9c8627a67e27929f1186c9d8e8aa94e-runc.hgfDYf.mount: Deactivated successfully.
Feb 12 20:30:44.185345 systemd[1]: Started cri-containerd-6bc9ed481332ec0ee84ad3a5de3c1c2ec9c8627a67e27929f1186c9d8e8aa94e.scope.
Feb 12 20:30:44.238475 env[1148]: time="2024-02-12T20:30:44.238416154Z" level=info msg="StartContainer for \"6bc9ed481332ec0ee84ad3a5de3c1c2ec9c8627a67e27929f1186c9d8e8aa94e\" returns successfully"
Feb 12 20:30:44.437995 kubelet[1499]: E0212 20:30:44.437787    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:44.644485 kubelet[1499]: I0212 20:30:44.644443    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lfz8r" podStartSLOduration=2.945371538 podCreationTimestamp="2024-02-12 20:30:40 +0000 UTC" firstStartedPulling="2024-02-12 20:30:42.417299912 +0000 UTC m=+3.599355848" lastFinishedPulling="2024-02-12 20:30:44.116311565 +0000 UTC m=+5.298367506" observedRunningTime="2024-02-12 20:30:44.644164404 +0000 UTC m=+5.826220353" watchObservedRunningTime="2024-02-12 20:30:44.644383196 +0000 UTC m=+5.826439153"
Feb 12 20:30:45.438471 kubelet[1499]: E0212 20:30:45.438320    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:46.438950 kubelet[1499]: E0212 20:30:46.438895    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:47.440001 kubelet[1499]: E0212 20:30:47.439903    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:48.440437 kubelet[1499]: E0212 20:30:48.440354    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:49.440625 kubelet[1499]: E0212 20:30:49.440523    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:49.736068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649111129.mount: Deactivated successfully.
Feb 12 20:30:50.441552 kubelet[1499]: E0212 20:30:50.441477    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:51.442703 kubelet[1499]: E0212 20:30:51.442559    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:52.443528 kubelet[1499]: E0212 20:30:52.443454    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:53.013615 env[1148]: time="2024-02-12T20:30:53.013540463Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:53.016625 env[1148]: time="2024-02-12T20:30:53.016573648Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:53.019494 env[1148]: time="2024-02-12T20:30:53.019425654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:53.020337 env[1148]: time="2024-02-12T20:30:53.020279761Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 20:30:53.023363 env[1148]: time="2024-02-12T20:30:53.023309841Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:30:53.049750 env[1148]: time="2024-02-12T20:30:53.049647551Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\""
Feb 12 20:30:53.050534 env[1148]: time="2024-02-12T20:30:53.050477461Z" level=info msg="StartContainer for \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\""
Feb 12 20:30:53.086837 systemd[1]: run-containerd-runc-k8s.io-8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2-runc.ZVvoVZ.mount: Deactivated successfully.
Feb 12 20:30:53.091447 systemd[1]: Started cri-containerd-8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2.scope.
Feb 12 20:30:53.138115 env[1148]: time="2024-02-12T20:30:53.138051736Z" level=info msg="StartContainer for \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\" returns successfully"
Feb 12 20:30:53.148545 systemd[1]: cri-containerd-8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2.scope: Deactivated successfully.
Feb 12 20:30:53.443825 kubelet[1499]: E0212 20:30:53.443734    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:54.034613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2-rootfs.mount: Deactivated successfully.
Feb 12 20:30:54.444848 kubelet[1499]: E0212 20:30:54.444787    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:55.292220 env[1148]: time="2024-02-12T20:30:55.292133150Z" level=info msg="shim disconnected" id=8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2
Feb 12 20:30:55.292220 env[1148]: time="2024-02-12T20:30:55.292209993Z" level=warning msg="cleaning up after shim disconnected" id=8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2 namespace=k8s.io
Feb 12 20:30:55.292220 env[1148]: time="2024-02-12T20:30:55.292226052Z" level=info msg="cleaning up dead shim"
Feb 12 20:30:55.304343 env[1148]: time="2024-02-12T20:30:55.304253313Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:30:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1829 runtime=io.containerd.runc.v2\n"
Feb 12 20:30:55.446019 kubelet[1499]: E0212 20:30:55.445929    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:55.664563 env[1148]: time="2024-02-12T20:30:55.664490569Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:30:55.685051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067482781.mount: Deactivated successfully.
Feb 12 20:30:55.697459 env[1148]: time="2024-02-12T20:30:55.697384064Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\""
Feb 12 20:30:55.698344 env[1148]: time="2024-02-12T20:30:55.698262076Z" level=info msg="StartContainer for \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\""
Feb 12 20:30:55.724464 systemd[1]: Started cri-containerd-557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08.scope.
Feb 12 20:30:55.744543 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 12 20:30:55.771821 env[1148]: time="2024-02-12T20:30:55.771760889Z" level=info msg="StartContainer for \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\" returns successfully"
Feb 12 20:30:55.788058 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:30:55.788454 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:30:55.789264 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 20:30:55.794251 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:30:55.794785 systemd[1]: cri-containerd-557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08.scope: Deactivated successfully.
Feb 12 20:30:55.810845 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:30:55.834279 env[1148]: time="2024-02-12T20:30:55.834195875Z" level=info msg="shim disconnected" id=557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08
Feb 12 20:30:55.834279 env[1148]: time="2024-02-12T20:30:55.834263153Z" level=warning msg="cleaning up after shim disconnected" id=557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08 namespace=k8s.io
Feb 12 20:30:55.834279 env[1148]: time="2024-02-12T20:30:55.834279643Z" level=info msg="cleaning up dead shim"
Feb 12 20:30:55.845306 env[1148]: time="2024-02-12T20:30:55.845249037Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:30:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1894 runtime=io.containerd.runc.v2\n"
Feb 12 20:30:56.446534 kubelet[1499]: E0212 20:30:56.446476    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:56.668878 env[1148]: time="2024-02-12T20:30:56.668829475Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:30:56.682154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08-rootfs.mount: Deactivated successfully.
Feb 12 20:30:56.707228 env[1148]: time="2024-02-12T20:30:56.707065824Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\""
Feb 12 20:30:56.708265 env[1148]: time="2024-02-12T20:30:56.708218224Z" level=info msg="StartContainer for \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\""
Feb 12 20:30:56.739999 systemd[1]: Started cri-containerd-439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4.scope.
Feb 12 20:30:56.794682 systemd[1]: cri-containerd-439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4.scope: Deactivated successfully.
Feb 12 20:30:56.796440 env[1148]: time="2024-02-12T20:30:56.796389380Z" level=info msg="StartContainer for \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\" returns successfully"
Feb 12 20:30:56.830872 env[1148]: time="2024-02-12T20:30:56.830794255Z" level=info msg="shim disconnected" id=439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4
Feb 12 20:30:56.830872 env[1148]: time="2024-02-12T20:30:56.830865433Z" level=warning msg="cleaning up after shim disconnected" id=439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4 namespace=k8s.io
Feb 12 20:30:56.830872 env[1148]: time="2024-02-12T20:30:56.830881112Z" level=info msg="cleaning up dead shim"
Feb 12 20:30:56.843019 env[1148]: time="2024-02-12T20:30:56.842947104Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:30:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1951 runtime=io.containerd.runc.v2\n"
Feb 12 20:30:57.447538 kubelet[1499]: E0212 20:30:57.447463    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:57.674857 env[1148]: time="2024-02-12T20:30:57.674779766Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:30:57.681925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4-rootfs.mount: Deactivated successfully.
Feb 12 20:30:57.705454 env[1148]: time="2024-02-12T20:30:57.705135553Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\""
Feb 12 20:30:57.706178 env[1148]: time="2024-02-12T20:30:57.706130932Z" level=info msg="StartContainer for \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\""
Feb 12 20:30:57.736750 systemd[1]: Started cri-containerd-1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382.scope.
Feb 12 20:30:57.772476 systemd[1]: cri-containerd-1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382.scope: Deactivated successfully.
Feb 12 20:30:57.774795 env[1148]: time="2024-02-12T20:30:57.774744765Z" level=info msg="StartContainer for \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\" returns successfully"
Feb 12 20:30:57.804018 env[1148]: time="2024-02-12T20:30:57.803884976Z" level=info msg="shim disconnected" id=1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382
Feb 12 20:30:57.804325 env[1148]: time="2024-02-12T20:30:57.804041642Z" level=warning msg="cleaning up after shim disconnected" id=1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382 namespace=k8s.io
Feb 12 20:30:57.804325 env[1148]: time="2024-02-12T20:30:57.804060735Z" level=info msg="cleaning up dead shim"
Feb 12 20:30:57.817054 env[1148]: time="2024-02-12T20:30:57.816961168Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:30:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2004 runtime=io.containerd.runc.v2\n"
Feb 12 20:30:58.448322 kubelet[1499]: E0212 20:30:58.448277    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:58.543486 systemd[1]: Started sshd@6-10.128.0.56:22-36.99.163.171:34474.service.
Feb 12 20:30:58.682530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382-rootfs.mount: Deactivated successfully.
Feb 12 20:30:58.686906 env[1148]: time="2024-02-12T20:30:58.686847313Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:30:58.715027 env[1148]: time="2024-02-12T20:30:58.714653731Z" level=info msg="CreateContainer within sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\""
Feb 12 20:30:58.715791 env[1148]: time="2024-02-12T20:30:58.715737782Z" level=info msg="StartContainer for \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\""
Feb 12 20:30:58.751411 systemd[1]: Started cri-containerd-d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c.scope.
Feb 12 20:30:58.796367 env[1148]: time="2024-02-12T20:30:58.796274544Z" level=info msg="StartContainer for \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\" returns successfully"
Feb 12 20:30:58.942398 kubelet[1499]: I0212 20:30:58.941449    1499 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 20:30:59.309022 kernel: Initializing XFRM netlink socket
Feb 12 20:30:59.433586 kubelet[1499]: E0212 20:30:59.433513    1499 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:59.449020 kubelet[1499]: E0212 20:30:59.448915    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:30:59.682588 systemd[1]: run-containerd-runc-k8s.io-d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c-runc.FzyZNZ.mount: Deactivated successfully.
Feb 12 20:30:59.704069 kubelet[1499]: I0212 20:30:59.704017    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-42gb8" podStartSLOduration=9.116151393 podCreationTimestamp="2024-02-12 20:30:40 +0000 UTC" firstStartedPulling="2024-02-12 20:30:42.432906249 +0000 UTC m=+3.614962169" lastFinishedPulling="2024-02-12 20:30:53.020696157 +0000 UTC m=+14.202752095" observedRunningTime="2024-02-12 20:30:59.703862076 +0000 UTC m=+20.885918021" watchObservedRunningTime="2024-02-12 20:30:59.703941319 +0000 UTC m=+20.885997263"
Feb 12 20:31:00.449950 kubelet[1499]: E0212 20:31:00.449846    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:00.978558 systemd-networkd[1025]: cilium_host: Link UP
Feb 12 20:31:00.978757 systemd-networkd[1025]: cilium_net: Link UP
Feb 12 20:31:00.978764 systemd-networkd[1025]: cilium_net: Gained carrier
Feb 12 20:31:00.979012 systemd-networkd[1025]: cilium_host: Gained carrier
Feb 12 20:31:00.985042 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 20:31:00.986680 systemd-networkd[1025]: cilium_host: Gained IPv6LL
Feb 12 20:31:01.129773 systemd-networkd[1025]: cilium_vxlan: Link UP
Feb 12 20:31:01.129785 systemd-networkd[1025]: cilium_vxlan: Gained carrier
Feb 12 20:31:01.398066 kernel: NET: Registered PF_ALG protocol family
Feb 12 20:31:01.450138 kubelet[1499]: E0212 20:31:01.450078    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:01.968287 systemd-networkd[1025]: cilium_net: Gained IPv6LL
Feb 12 20:31:02.239762 systemd-networkd[1025]: lxc_health: Link UP
Feb 12 20:31:02.255123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:31:02.255572 systemd-networkd[1025]: lxc_health: Gained carrier
Feb 12 20:31:02.450829 kubelet[1499]: E0212 20:31:02.450713    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:03.184402 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL
Feb 12 20:31:03.440518 systemd-networkd[1025]: lxc_health: Gained IPv6LL
Feb 12 20:31:03.451749 kubelet[1499]: E0212 20:31:03.451645    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:04.452823 kubelet[1499]: E0212 20:31:04.452767    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:05.453709 kubelet[1499]: E0212 20:31:05.453652    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:05.972915 kubelet[1499]: I0212 20:31:05.972856    1499 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:31:05.981847 systemd[1]: Created slice kubepods-besteffort-pode896944e_918c_4c9e_8a50_df97085cc677.slice.
Feb 12 20:31:06.054367 kubelet[1499]: I0212 20:31:06.054303    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7qkx\" (UniqueName: \"kubernetes.io/projected/e896944e-918c-4c9e-8a50-df97085cc677-kube-api-access-k7qkx\") pod \"nginx-deployment-845c78c8b9-sq7k9\" (UID: \"e896944e-918c-4c9e-8a50-df97085cc677\") " pod="default/nginx-deployment-845c78c8b9-sq7k9"
Feb 12 20:31:06.291320 env[1148]: time="2024-02-12T20:31:06.290207470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-sq7k9,Uid:e896944e-918c-4c9e-8a50-df97085cc677,Namespace:default,Attempt:0,}"
Feb 12 20:31:06.373488 systemd-networkd[1025]: lxcf81ebbfec89a: Link UP
Feb 12 20:31:06.394138 kernel: eth0: renamed from tmpea785
Feb 12 20:31:06.410117 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:31:06.420105 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf81ebbfec89a: link becomes ready
Feb 12 20:31:06.425384 systemd-networkd[1025]: lxcf81ebbfec89a: Gained carrier
Feb 12 20:31:06.455667 kubelet[1499]: E0212 20:31:06.455578    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:07.456600 kubelet[1499]: E0212 20:31:07.456547    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:07.849924 env[1148]: time="2024-02-12T20:31:07.849746105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:31:07.850524 env[1148]: time="2024-02-12T20:31:07.849904201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:31:07.850524 env[1148]: time="2024-02-12T20:31:07.850057179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:31:07.850675 env[1148]: time="2024-02-12T20:31:07.850515360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea785a20b5fa13aa8289f895332bdbd20a4cb4386f295bd5f4efc30dac39c7bf pid=2538 runtime=io.containerd.runc.v2
Feb 12 20:31:07.874313 systemd[1]: Started cri-containerd-ea785a20b5fa13aa8289f895332bdbd20a4cb4386f295bd5f4efc30dac39c7bf.scope.
Feb 12 20:31:07.935190 env[1148]: time="2024-02-12T20:31:07.935134598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-sq7k9,Uid:e896944e-918c-4c9e-8a50-df97085cc677,Namespace:default,Attempt:0,} returns sandbox id \"ea785a20b5fa13aa8289f895332bdbd20a4cb4386f295bd5f4efc30dac39c7bf\""
Feb 12 20:31:07.938064 env[1148]: time="2024-02-12T20:31:07.938024319Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 20:31:08.368601 systemd-networkd[1025]: lxcf81ebbfec89a: Gained IPv6LL
Feb 12 20:31:08.458436 kubelet[1499]: E0212 20:31:08.458371    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:09.459500 kubelet[1499]: E0212 20:31:09.459448    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:10.460304 kubelet[1499]: E0212 20:31:10.460256    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:10.753959 update_engine[1131]: I0212 20:31:10.753050  1131 update_attempter.cc:509] Updating boot flags...
Feb 12 20:31:10.807831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164983328.mount: Deactivated successfully.
Feb 12 20:31:11.460735 kubelet[1499]: E0212 20:31:11.460637    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:12.144262 env[1148]: time="2024-02-12T20:31:12.144189673Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:12.147685 env[1148]: time="2024-02-12T20:31:12.147623782Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:12.150641 env[1148]: time="2024-02-12T20:31:12.150589608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:12.153602 env[1148]: time="2024-02-12T20:31:12.153549908Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:12.154510 env[1148]: time="2024-02-12T20:31:12.154445210Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 20:31:12.157795 env[1148]: time="2024-02-12T20:31:12.157732866Z" level=info msg="CreateContainer within sandbox \"ea785a20b5fa13aa8289f895332bdbd20a4cb4386f295bd5f4efc30dac39c7bf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 12 20:31:12.174130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142128220.mount: Deactivated successfully.
Feb 12 20:31:12.188157 env[1148]: time="2024-02-12T20:31:12.188078318Z" level=info msg="CreateContainer within sandbox \"ea785a20b5fa13aa8289f895332bdbd20a4cb4386f295bd5f4efc30dac39c7bf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"400759cf2ebd882ebb1c3a8d60e5277b58469c7a709127d4e94f40fc05ef2664\""
Feb 12 20:31:12.189234 env[1148]: time="2024-02-12T20:31:12.189181140Z" level=info msg="StartContainer for \"400759cf2ebd882ebb1c3a8d60e5277b58469c7a709127d4e94f40fc05ef2664\""
Feb 12 20:31:12.221312 systemd[1]: Started cri-containerd-400759cf2ebd882ebb1c3a8d60e5277b58469c7a709127d4e94f40fc05ef2664.scope.
Feb 12 20:31:12.268821 env[1148]: time="2024-02-12T20:31:12.268757002Z" level=info msg="StartContainer for \"400759cf2ebd882ebb1c3a8d60e5277b58469c7a709127d4e94f40fc05ef2664\" returns successfully"
Feb 12 20:31:12.461418 kubelet[1499]: E0212 20:31:12.461257    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:12.725961 kubelet[1499]: I0212 20:31:12.725770    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-sq7k9" podStartSLOduration=3.50807562 podCreationTimestamp="2024-02-12 20:31:05 +0000 UTC" firstStartedPulling="2024-02-12 20:31:07.93728184 +0000 UTC m=+29.119337773" lastFinishedPulling="2024-02-12 20:31:12.154924945 +0000 UTC m=+33.336980868" observedRunningTime="2024-02-12 20:31:12.725186927 +0000 UTC m=+33.907242874" watchObservedRunningTime="2024-02-12 20:31:12.725718715 +0000 UTC m=+33.907774661"
Feb 12 20:31:13.462018 kubelet[1499]: E0212 20:31:13.461931    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:14.462393 kubelet[1499]: E0212 20:31:14.462313    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:15.463041 kubelet[1499]: E0212 20:31:15.462987    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:16.463311 kubelet[1499]: E0212 20:31:16.463239    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:17.463954 kubelet[1499]: E0212 20:31:17.463886    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:18.464261 kubelet[1499]: E0212 20:31:18.464187    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:19.433373 kubelet[1499]: E0212 20:31:19.433293    1499 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:19.465133 kubelet[1499]: E0212 20:31:19.465060    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:20.018051 kubelet[1499]: I0212 20:31:20.017990    1499 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:31:20.025652 systemd[1]: Created slice kubepods-besteffort-pod82b93625_b35e_4541_997c_80aec5c036c4.slice.
Feb 12 20:31:20.058612 kubelet[1499]: I0212 20:31:20.058545    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg56k\" (UniqueName: \"kubernetes.io/projected/82b93625-b35e-4541-997c-80aec5c036c4-kube-api-access-sg56k\") pod \"nfs-server-provisioner-0\" (UID: \"82b93625-b35e-4541-997c-80aec5c036c4\") " pod="default/nfs-server-provisioner-0"
Feb 12 20:31:20.058612 kubelet[1499]: I0212 20:31:20.058608    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/82b93625-b35e-4541-997c-80aec5c036c4-data\") pod \"nfs-server-provisioner-0\" (UID: \"82b93625-b35e-4541-997c-80aec5c036c4\") " pod="default/nfs-server-provisioner-0"
Feb 12 20:31:20.330601 env[1148]: time="2024-02-12T20:31:20.330541329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:82b93625-b35e-4541-997c-80aec5c036c4,Namespace:default,Attempt:0,}"
Feb 12 20:31:20.384399 systemd-networkd[1025]: lxc559e13109aa8: Link UP
Feb 12 20:31:20.398103 kernel: eth0: renamed from tmp9c548
Feb 12 20:31:20.416527 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:31:20.416695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc559e13109aa8: link becomes ready
Feb 12 20:31:20.417027 systemd-networkd[1025]: lxc559e13109aa8: Gained carrier
Feb 12 20:31:20.465704 kubelet[1499]: E0212 20:31:20.465629    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:20.709388 env[1148]: time="2024-02-12T20:31:20.709202655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:31:20.709388 env[1148]: time="2024-02-12T20:31:20.709258463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:31:20.709809 env[1148]: time="2024-02-12T20:31:20.709276592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:31:20.716818 env[1148]: time="2024-02-12T20:31:20.710016282Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c54874e63e9d88124fb66d2f41e8b4006c5bbd52b38643d94e61b9821e20b79 pid=2684 runtime=io.containerd.runc.v2
Feb 12 20:31:20.736956 systemd[1]: Started cri-containerd-9c54874e63e9d88124fb66d2f41e8b4006c5bbd52b38643d94e61b9821e20b79.scope.
Feb 12 20:31:20.798789 env[1148]: time="2024-02-12T20:31:20.798258893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:82b93625-b35e-4541-997c-80aec5c036c4,Namespace:default,Attempt:0,} returns sandbox id \"9c54874e63e9d88124fb66d2f41e8b4006c5bbd52b38643d94e61b9821e20b79\""
Feb 12 20:31:20.800870 env[1148]: time="2024-02-12T20:31:20.800816040Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 12 20:31:21.466141 kubelet[1499]: E0212 20:31:21.466057    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:21.936724 systemd-networkd[1025]: lxc559e13109aa8: Gained IPv6LL
Feb 12 20:31:22.466290 kubelet[1499]: E0212 20:31:22.466239    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:23.360672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061185700.mount: Deactivated successfully.
Feb 12 20:31:23.467271 kubelet[1499]: E0212 20:31:23.467179    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:24.467444 kubelet[1499]: E0212 20:31:24.467387    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:25.468019 kubelet[1499]: E0212 20:31:25.467916    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:25.793284 env[1148]: time="2024-02-12T20:31:25.793088034Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:25.796670 env[1148]: time="2024-02-12T20:31:25.796613061Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:25.800435 env[1148]: time="2024-02-12T20:31:25.800379761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:25.803632 env[1148]: time="2024-02-12T20:31:25.803576931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:25.805115 env[1148]: time="2024-02-12T20:31:25.805060214Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 12 20:31:25.808807 env[1148]: time="2024-02-12T20:31:25.808753294Z" level=info msg="CreateContainer within sandbox \"9c54874e63e9d88124fb66d2f41e8b4006c5bbd52b38643d94e61b9821e20b79\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 12 20:31:25.827586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049205536.mount: Deactivated successfully.
Feb 12 20:31:25.844400 env[1148]: time="2024-02-12T20:31:25.844325205Z" level=info msg="CreateContainer within sandbox \"9c54874e63e9d88124fb66d2f41e8b4006c5bbd52b38643d94e61b9821e20b79\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b7ebf8b7e6172ea2614bb6b1640a8186535f164ed3e705e7ae35372ae3bc7965\""
Feb 12 20:31:25.845528 env[1148]: time="2024-02-12T20:31:25.845484948Z" level=info msg="StartContainer for \"b7ebf8b7e6172ea2614bb6b1640a8186535f164ed3e705e7ae35372ae3bc7965\""
Feb 12 20:31:25.875235 systemd[1]: Started cri-containerd-b7ebf8b7e6172ea2614bb6b1640a8186535f164ed3e705e7ae35372ae3bc7965.scope.
Feb 12 20:31:25.927299 env[1148]: time="2024-02-12T20:31:25.927230162Z" level=info msg="StartContainer for \"b7ebf8b7e6172ea2614bb6b1640a8186535f164ed3e705e7ae35372ae3bc7965\" returns successfully"
Feb 12 20:31:26.468875 kubelet[1499]: E0212 20:31:26.468823    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:26.773700 kubelet[1499]: I0212 20:31:26.773243    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.768058204 podCreationTimestamp="2024-02-12 20:31:20 +0000 UTC" firstStartedPulling="2024-02-12 20:31:20.800368983 +0000 UTC m=+41.982424917" lastFinishedPulling="2024-02-12 20:31:25.805478117 +0000 UTC m=+46.987534058" observedRunningTime="2024-02-12 20:31:26.772886001 +0000 UTC m=+47.954941949" watchObservedRunningTime="2024-02-12 20:31:26.773167345 +0000 UTC m=+47.955223290"
Feb 12 20:31:27.469651 kubelet[1499]: E0212 20:31:27.469577    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:28.470856 kubelet[1499]: E0212 20:31:28.470764    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:29.471144 kubelet[1499]: E0212 20:31:29.471074    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:30.472249 kubelet[1499]: E0212 20:31:30.472174    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:31.473388 kubelet[1499]: E0212 20:31:31.473317    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:32.474336 kubelet[1499]: E0212 20:31:32.474258    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:33.475023 kubelet[1499]: E0212 20:31:33.474938    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:34.475536 kubelet[1499]: E0212 20:31:34.475465    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:35.476472 kubelet[1499]: E0212 20:31:35.476394    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:35.843655 systemd[1]: Started sshd@7-10.128.0.56:22-178.128.91.222:52322.service.
Feb 12 20:31:35.855226 kubelet[1499]: I0212 20:31:35.854615    1499 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:31:35.862996 systemd[1]: Created slice kubepods-besteffort-pod586538ab_2186_4888_ade2_7fea9adaa688.slice.
Feb 12 20:31:35.965367 kubelet[1499]: I0212 20:31:35.965292    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b724103a-172d-4aca-a3f1-8d31e83a052a\" (UniqueName: \"kubernetes.io/nfs/586538ab-2186-4888-ade2-7fea9adaa688-pvc-b724103a-172d-4aca-a3f1-8d31e83a052a\") pod \"test-pod-1\" (UID: \"586538ab-2186-4888-ade2-7fea9adaa688\") " pod="default/test-pod-1"
Feb 12 20:31:35.965588 kubelet[1499]: I0212 20:31:35.965387    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwb2f\" (UniqueName: \"kubernetes.io/projected/586538ab-2186-4888-ade2-7fea9adaa688-kube-api-access-xwb2f\") pod \"test-pod-1\" (UID: \"586538ab-2186-4888-ade2-7fea9adaa688\") " pod="default/test-pod-1"
Feb 12 20:31:36.109014 kernel: FS-Cache: Loaded
Feb 12 20:31:36.163623 kernel: RPC: Registered named UNIX socket transport module.
Feb 12 20:31:36.163800 kernel: RPC: Registered udp transport module.
Feb 12 20:31:36.163846 kernel: RPC: Registered tcp transport module.
Feb 12 20:31:36.175184 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 12 20:31:36.233003 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 12 20:31:36.473484 kernel: NFS: Registering the id_resolver key type
Feb 12 20:31:36.473672 kernel: Key type id_resolver registered
Feb 12 20:31:36.473722 kernel: Key type id_legacy registered
Feb 12 20:31:36.477580 kubelet[1499]: E0212 20:31:36.477537    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:36.530713 nfsidmap[2803]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal'
Feb 12 20:31:36.545126 nfsidmap[2804]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal'
Feb 12 20:31:36.768091 env[1148]: time="2024-02-12T20:31:36.767926596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:586538ab-2186-4888-ade2-7fea9adaa688,Namespace:default,Attempt:0,}"
Feb 12 20:31:36.818799 systemd-networkd[1025]: lxc1ee9e23b79c8: Link UP
Feb 12 20:31:36.832108 kernel: eth0: renamed from tmp2bece
Feb 12 20:31:36.860352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:31:36.860500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1ee9e23b79c8: link becomes ready
Feb 12 20:31:36.860801 systemd-networkd[1025]: lxc1ee9e23b79c8: Gained carrier
Feb 12 20:31:36.992130 sshd[2785]: Failed password for root from 178.128.91.222 port 52322 ssh2
Feb 12 20:31:37.142060 env[1148]: time="2024-02-12T20:31:37.141888430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:31:37.142060 env[1148]: time="2024-02-12T20:31:37.141954591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:31:37.143255 env[1148]: time="2024-02-12T20:31:37.143171790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:31:37.143643 env[1148]: time="2024-02-12T20:31:37.143582638Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bece14bb6c2ead22de14b8c768d7a7a69b22f6eb560613233dd4e75c2ffec6e pid=2833 runtime=io.containerd.runc.v2
Feb 12 20:31:37.172051 systemd[1]: run-containerd-runc-k8s.io-2bece14bb6c2ead22de14b8c768d7a7a69b22f6eb560613233dd4e75c2ffec6e-runc.VLrJCw.mount: Deactivated successfully.
Feb 12 20:31:37.179525 systemd[1]: Started cri-containerd-2bece14bb6c2ead22de14b8c768d7a7a69b22f6eb560613233dd4e75c2ffec6e.scope.
Feb 12 20:31:37.211256 sshd[2785]: Received disconnect from 178.128.91.222 port 52322:11: Bye Bye [preauth]
Feb 12 20:31:37.211256 sshd[2785]: Disconnected from authenticating user root 178.128.91.222 port 52322 [preauth]
Feb 12 20:31:37.213527 systemd[1]: sshd@7-10.128.0.56:22-178.128.91.222:52322.service: Deactivated successfully.
Feb 12 20:31:37.242561 env[1148]: time="2024-02-12T20:31:37.242502329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:586538ab-2186-4888-ade2-7fea9adaa688,Namespace:default,Attempt:0,} returns sandbox id \"2bece14bb6c2ead22de14b8c768d7a7a69b22f6eb560613233dd4e75c2ffec6e\""
Feb 12 20:31:37.244760 env[1148]: time="2024-02-12T20:31:37.244639013Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 12 20:31:37.479433 kubelet[1499]: E0212 20:31:37.479274    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:37.523118 env[1148]: time="2024-02-12T20:31:37.523053197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:37.526684 env[1148]: time="2024-02-12T20:31:37.526626122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:37.529650 env[1148]: time="2024-02-12T20:31:37.529601860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:37.532611 env[1148]: time="2024-02-12T20:31:37.532559016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:37.533534 env[1148]: time="2024-02-12T20:31:37.533481589Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 12 20:31:37.536854 env[1148]: time="2024-02-12T20:31:37.536807186Z" level=info msg="CreateContainer within sandbox \"2bece14bb6c2ead22de14b8c768d7a7a69b22f6eb560613233dd4e75c2ffec6e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 12 20:31:37.555242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902978125.mount: Deactivated successfully.
Feb 12 20:31:37.565779 env[1148]: time="2024-02-12T20:31:37.565711879Z" level=info msg="CreateContainer within sandbox \"2bece14bb6c2ead22de14b8c768d7a7a69b22f6eb560613233dd4e75c2ffec6e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6fb322465bc94879997b9f316dae717dae0628ac7cb0d46eea9e9e936e2b93a9\""
Feb 12 20:31:37.567261 env[1148]: time="2024-02-12T20:31:37.567202214Z" level=info msg="StartContainer for \"6fb322465bc94879997b9f316dae717dae0628ac7cb0d46eea9e9e936e2b93a9\""
Feb 12 20:31:37.589789 systemd[1]: Started cri-containerd-6fb322465bc94879997b9f316dae717dae0628ac7cb0d46eea9e9e936e2b93a9.scope.
Feb 12 20:31:37.633503 env[1148]: time="2024-02-12T20:31:37.633439326Z" level=info msg="StartContainer for \"6fb322465bc94879997b9f316dae717dae0628ac7cb0d46eea9e9e936e2b93a9\" returns successfully"
Feb 12 20:31:37.802889 kubelet[1499]: I0212 20:31:37.802735    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.513134989 podCreationTimestamp="2024-02-12 20:31:20 +0000 UTC" firstStartedPulling="2024-02-12 20:31:37.244323143 +0000 UTC m=+58.426379063" lastFinishedPulling="2024-02-12 20:31:37.533889034 +0000 UTC m=+58.715944971" observedRunningTime="2024-02-12 20:31:37.802496879 +0000 UTC m=+58.984552821" watchObservedRunningTime="2024-02-12 20:31:37.802700897 +0000 UTC m=+58.984756842"
Feb 12 20:31:38.320326 systemd-networkd[1025]: lxc1ee9e23b79c8: Gained IPv6LL
Feb 12 20:31:38.479543 kubelet[1499]: E0212 20:31:38.479474    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:39.434189 kubelet[1499]: E0212 20:31:39.434104    1499 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:39.480133 kubelet[1499]: E0212 20:31:39.480069    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:40.480352 kubelet[1499]: E0212 20:31:40.480271    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:41.010870 systemd[1]: run-containerd-runc-k8s.io-d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c-runc.6jwW7U.mount: Deactivated successfully.
Feb 12 20:31:41.035604 env[1148]: time="2024-02-12T20:31:41.035519933Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:31:41.045249 env[1148]: time="2024-02-12T20:31:41.045198743Z" level=info msg="StopContainer for \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\" with timeout 1 (s)"
Feb 12 20:31:41.045899 env[1148]: time="2024-02-12T20:31:41.045854636Z" level=info msg="Stop container \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\" with signal terminated"
Feb 12 20:31:41.056932 systemd-networkd[1025]: lxc_health: Link DOWN
Feb 12 20:31:41.056954 systemd-networkd[1025]: lxc_health: Lost carrier
Feb 12 20:31:41.079684 systemd[1]: cri-containerd-d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c.scope: Deactivated successfully.
Feb 12 20:31:41.080125 systemd[1]: cri-containerd-d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c.scope: Consumed 8.715s CPU time.
Feb 12 20:31:41.109655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c-rootfs.mount: Deactivated successfully.
Feb 12 20:31:41.480922 kubelet[1499]: E0212 20:31:41.480856    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:42.059611 env[1148]: time="2024-02-12T20:31:42.059518515Z" level=info msg="Kill container \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\""
Feb 12 20:31:42.452829 env[1148]: time="2024-02-12T20:31:42.452765403Z" level=info msg="shim disconnected" id=d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c
Feb 12 20:31:42.452829 env[1148]: time="2024-02-12T20:31:42.452827074Z" level=warning msg="cleaning up after shim disconnected" id=d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c namespace=k8s.io
Feb 12 20:31:42.453171 env[1148]: time="2024-02-12T20:31:42.452842477Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:42.464988 env[1148]: time="2024-02-12T20:31:42.464916615Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2965 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:42.470041 env[1148]: time="2024-02-12T20:31:42.469945977Z" level=info msg="StopContainer for \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\" returns successfully"
Feb 12 20:31:42.470765 env[1148]: time="2024-02-12T20:31:42.470708043Z" level=info msg="StopPodSandbox for \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\""
Feb 12 20:31:42.470917 env[1148]: time="2024-02-12T20:31:42.470787131Z" level=info msg="Container to stop \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:42.470917 env[1148]: time="2024-02-12T20:31:42.470812084Z" level=info msg="Container to stop \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:42.470917 env[1148]: time="2024-02-12T20:31:42.470831543Z" level=info msg="Container to stop \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:42.470917 env[1148]: time="2024-02-12T20:31:42.470850336Z" level=info msg="Container to stop \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:42.470917 env[1148]: time="2024-02-12T20:31:42.470868734Z" level=info msg="Container to stop \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:42.473704 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb-shm.mount: Deactivated successfully.
Feb 12 20:31:42.481803 kubelet[1499]: E0212 20:31:42.481755    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:42.483812 systemd[1]: cri-containerd-682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb.scope: Deactivated successfully.
Feb 12 20:31:42.513552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb-rootfs.mount: Deactivated successfully.
Feb 12 20:31:42.521458 env[1148]: time="2024-02-12T20:31:42.521391404Z" level=info msg="shim disconnected" id=682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb
Feb 12 20:31:42.521458 env[1148]: time="2024-02-12T20:31:42.521458026Z" level=warning msg="cleaning up after shim disconnected" id=682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb namespace=k8s.io
Feb 12 20:31:42.521886 env[1148]: time="2024-02-12T20:31:42.521471833Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:42.534670 env[1148]: time="2024-02-12T20:31:42.534590310Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2995 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:42.535234 env[1148]: time="2024-02-12T20:31:42.535170607Z" level=info msg="TearDown network for sandbox \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" successfully"
Feb 12 20:31:42.535234 env[1148]: time="2024-02-12T20:31:42.535214578Z" level=info msg="StopPodSandbox for \"682cd1e57ea64c22da8dc472268a8f9abf3855dbafa5b494bf727cf6bec55deb\" returns successfully"
Feb 12 20:31:42.711921 kubelet[1499]: I0212 20:31:42.711765    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-run\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.711921 kubelet[1499]: I0212 20:31:42.711838    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5666768d-2f08-4c77-9a26-ddefddcba6f8-clustermesh-secrets\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.711921 kubelet[1499]: I0212 20:31:42.711866    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-cgroup\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713166 kubelet[1499]: I0212 20:31:42.713133    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-bpf-maps\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713304 kubelet[1499]: I0212 20:31:42.713203    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-lib-modules\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713304 kubelet[1499]: I0212 20:31:42.713237    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-host-proc-sys-net\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713432 kubelet[1499]: I0212 20:31:42.713304    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-etc-cni-netd\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713432 kubelet[1499]: I0212 20:31:42.713357    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-xtables-lock\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713432 kubelet[1499]: I0212 20:31:42.713391    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-hostproc\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713432 kubelet[1499]: I0212 20:31:42.713428    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5666768d-2f08-4c77-9a26-ddefddcba6f8-hubble-tls\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713656 kubelet[1499]: I0212 20:31:42.713467    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x9d6\" (UniqueName: \"kubernetes.io/projected/5666768d-2f08-4c77-9a26-ddefddcba6f8-kube-api-access-6x9d6\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713656 kubelet[1499]: I0212 20:31:42.713500    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cni-path\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713656 kubelet[1499]: I0212 20:31:42.713540    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-config-path\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713656 kubelet[1499]: I0212 20:31:42.713583    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-host-proc-sys-kernel\") pod \"5666768d-2f08-4c77-9a26-ddefddcba6f8\" (UID: \"5666768d-2f08-4c77-9a26-ddefddcba6f8\") "
Feb 12 20:31:42.713656 kubelet[1499]: I0212 20:31:42.713642    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.713924 kubelet[1499]: I0212 20:31:42.713697    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.713924 kubelet[1499]: I0212 20:31:42.713724    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.713924 kubelet[1499]: I0212 20:31:42.713750    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.713924 kubelet[1499]: I0212 20:31:42.713775    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.713924 kubelet[1499]: I0212 20:31:42.713801    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.714226 kubelet[1499]: I0212 20:31:42.713825    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.714226 kubelet[1499]: I0212 20:31:42.713851    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.714800 kubelet[1499]: I0212 20:31:42.714399    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.714800 kubelet[1499]: I0212 20:31:42.714469    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:42.715362 kubelet[1499]: W0212 20:31:42.715316    1499 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/5666768d-2f08-4c77-9a26-ddefddcba6f8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 20:31:42.720462 systemd[1]: var-lib-kubelet-pods-5666768d\x2d2f08\x2d4c77\x2d9a26\x2dddefddcba6f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 20:31:42.721784 kubelet[1499]: I0212 20:31:42.721741    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:31:42.722326 kubelet[1499]: I0212 20:31:42.722290    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5666768d-2f08-4c77-9a26-ddefddcba6f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:31:42.728454 systemd[1]: var-lib-kubelet-pods-5666768d\x2d2f08\x2d4c77\x2d9a26\x2dddefddcba6f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6x9d6.mount: Deactivated successfully.
Feb 12 20:31:42.731869 systemd[1]: var-lib-kubelet-pods-5666768d\x2d2f08\x2d4c77\x2d9a26\x2dddefddcba6f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 20:31:42.733302 kubelet[1499]: I0212 20:31:42.733255    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5666768d-2f08-4c77-9a26-ddefddcba6f8-kube-api-access-6x9d6" (OuterVolumeSpecName: "kube-api-access-6x9d6") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "kube-api-access-6x9d6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:31:42.733427 kubelet[1499]: I0212 20:31:42.733273    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5666768d-2f08-4c77-9a26-ddefddcba6f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5666768d-2f08-4c77-9a26-ddefddcba6f8" (UID: "5666768d-2f08-4c77-9a26-ddefddcba6f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:31:42.805451 kubelet[1499]: I0212 20:31:42.805420    1499 scope.go:115] "RemoveContainer" containerID="d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c"
Feb 12 20:31:42.807704 env[1148]: time="2024-02-12T20:31:42.807651376Z" level=info msg="RemoveContainer for \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\""
Feb 12 20:31:42.813532 systemd[1]: Removed slice kubepods-burstable-pod5666768d_2f08_4c77_9a26_ddefddcba6f8.slice.
Feb 12 20:31:42.813700 systemd[1]: kubepods-burstable-pod5666768d_2f08_4c77_9a26_ddefddcba6f8.slice: Consumed 8.863s CPU time.
Feb 12 20:31:42.814565 kubelet[1499]: I0212 20:31:42.814531    1499 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-run\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814565 kubelet[1499]: I0212 20:31:42.814570    1499 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5666768d-2f08-4c77-9a26-ddefddcba6f8-clustermesh-secrets\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814768 kubelet[1499]: I0212 20:31:42.814587    1499 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-cgroup\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814768 kubelet[1499]: I0212 20:31:42.814603    1499 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-bpf-maps\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814768 kubelet[1499]: I0212 20:31:42.814618    1499 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-lib-modules\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814768 kubelet[1499]: I0212 20:31:42.814635    1499 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-host-proc-sys-net\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814768 kubelet[1499]: I0212 20:31:42.814651    1499 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-etc-cni-netd\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814768 kubelet[1499]: I0212 20:31:42.814667    1499 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-xtables-lock\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814768 kubelet[1499]: I0212 20:31:42.814685    1499 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-hostproc\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.814768 kubelet[1499]: I0212 20:31:42.814701    1499 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5666768d-2f08-4c77-9a26-ddefddcba6f8-hubble-tls\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.815294 kubelet[1499]: I0212 20:31:42.814718    1499 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6x9d6\" (UniqueName: \"kubernetes.io/projected/5666768d-2f08-4c77-9a26-ddefddcba6f8-kube-api-access-6x9d6\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.815294 kubelet[1499]: I0212 20:31:42.814735    1499 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-cni-path\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.815294 kubelet[1499]: I0212 20:31:42.814753    1499 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5666768d-2f08-4c77-9a26-ddefddcba6f8-cilium-config-path\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.815294 kubelet[1499]: I0212 20:31:42.814769    1499 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5666768d-2f08-4c77-9a26-ddefddcba6f8-host-proc-sys-kernel\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:42.816534 env[1148]: time="2024-02-12T20:31:42.816484984Z" level=info msg="RemoveContainer for \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\" returns successfully"
Feb 12 20:31:42.816873 kubelet[1499]: I0212 20:31:42.816829    1499 scope.go:115] "RemoveContainer" containerID="1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382"
Feb 12 20:31:42.824090 env[1148]: time="2024-02-12T20:31:42.823945438Z" level=info msg="RemoveContainer for \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\""
Feb 12 20:31:42.829431 env[1148]: time="2024-02-12T20:31:42.829377578Z" level=info msg="RemoveContainer for \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\" returns successfully"
Feb 12 20:31:42.832618 kubelet[1499]: I0212 20:31:42.832583    1499 scope.go:115] "RemoveContainer" containerID="439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4"
Feb 12 20:31:42.838027 env[1148]: time="2024-02-12T20:31:42.837957667Z" level=info msg="RemoveContainer for \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\""
Feb 12 20:31:42.843066 env[1148]: time="2024-02-12T20:31:42.842887201Z" level=info msg="RemoveContainer for \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\" returns successfully"
Feb 12 20:31:42.843883 kubelet[1499]: I0212 20:31:42.843838    1499 scope.go:115] "RemoveContainer" containerID="557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08"
Feb 12 20:31:42.845346 env[1148]: time="2024-02-12T20:31:42.845290004Z" level=info msg="RemoveContainer for \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\""
Feb 12 20:31:42.850093 env[1148]: time="2024-02-12T20:31:42.850033930Z" level=info msg="RemoveContainer for \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\" returns successfully"
Feb 12 20:31:42.850326 kubelet[1499]: I0212 20:31:42.850302    1499 scope.go:115] "RemoveContainer" containerID="8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2"
Feb 12 20:31:42.851820 env[1148]: time="2024-02-12T20:31:42.851781842Z" level=info msg="RemoveContainer for \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\""
Feb 12 20:31:42.856424 env[1148]: time="2024-02-12T20:31:42.856374575Z" level=info msg="RemoveContainer for \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\" returns successfully"
Feb 12 20:31:42.856715 kubelet[1499]: I0212 20:31:42.856671    1499 scope.go:115] "RemoveContainer" containerID="d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c"
Feb 12 20:31:42.857243 env[1148]: time="2024-02-12T20:31:42.857087248Z" level=error msg="ContainerStatus for \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\": not found"
Feb 12 20:31:42.857429 kubelet[1499]: E0212 20:31:42.857404    1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\": not found" containerID="d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c"
Feb 12 20:31:42.857525 kubelet[1499]: I0212 20:31:42.857476    1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c} err="failed to get container status \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7839b8f65c61406c2c2cd770d1d7e6e7dab7dd777d892300e82e16abd86494c\": not found"
Feb 12 20:31:42.857525 kubelet[1499]: I0212 20:31:42.857500    1499 scope.go:115] "RemoveContainer" containerID="1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382"
Feb 12 20:31:42.857920 env[1148]: time="2024-02-12T20:31:42.857832932Z" level=error msg="ContainerStatus for \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\": not found"
Feb 12 20:31:42.858197 kubelet[1499]: E0212 20:31:42.858175    1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\": not found" containerID="1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382"
Feb 12 20:31:42.858297 kubelet[1499]: I0212 20:31:42.858217    1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382} err="failed to get container status \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\": rpc error: code = NotFound desc = an error occurred when try to find container \"1da5e00b50ba6cc78fa9753b934401700dd9a76a4f62f1a41fb28c3ba4f9c382\": not found"
Feb 12 20:31:42.858297 kubelet[1499]: I0212 20:31:42.858234    1499 scope.go:115] "RemoveContainer" containerID="439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4"
Feb 12 20:31:42.858689 env[1148]: time="2024-02-12T20:31:42.858606078Z" level=error msg="ContainerStatus for \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\": not found"
Feb 12 20:31:42.859094 kubelet[1499]: E0212 20:31:42.859044    1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\": not found" containerID="439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4"
Feb 12 20:31:42.859094 kubelet[1499]: I0212 20:31:42.859098    1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4} err="failed to get container status \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"439c10dc7c15969e9c6f6239fc41a3ec236ec9084552224ade2b32d76aa079b4\": not found"
Feb 12 20:31:42.859297 kubelet[1499]: I0212 20:31:42.859115    1499 scope.go:115] "RemoveContainer" containerID="557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08"
Feb 12 20:31:42.859510 env[1148]: time="2024-02-12T20:31:42.859424835Z" level=error msg="ContainerStatus for \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\": not found"
Feb 12 20:31:42.859684 kubelet[1499]: E0212 20:31:42.859659    1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\": not found" containerID="557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08"
Feb 12 20:31:42.859786 kubelet[1499]: I0212 20:31:42.859709    1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08} err="failed to get container status \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\": rpc error: code = NotFound desc = an error occurred when try to find container \"557ef8b6c3f1953f0d2abe155198517c52bc6fd94a7fa4b13b9c7036b4347c08\": not found"
Feb 12 20:31:42.859786 kubelet[1499]: I0212 20:31:42.859727    1499 scope.go:115] "RemoveContainer" containerID="8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2"
Feb 12 20:31:42.860120 env[1148]: time="2024-02-12T20:31:42.860050589Z" level=error msg="ContainerStatus for \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\": not found"
Feb 12 20:31:42.860468 kubelet[1499]: E0212 20:31:42.860448    1499 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\": not found" containerID="8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2"
Feb 12 20:31:42.860665 kubelet[1499]: I0212 20:31:42.860616    1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2} err="failed to get container status \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a51cf153fa5f082d42227372d060198eb5d6a90c9e118b0d2209d921d04aab2\": not found"
Feb 12 20:31:43.440478 systemd[1]: Started sshd@8-10.128.0.56:22-36.99.163.171:40942.service.
Feb 12 20:31:43.482966 kubelet[1499]: E0212 20:31:43.482912    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:43.612190 kubelet[1499]: I0212 20:31:43.612156    1499 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=5666768d-2f08-4c77-9a26-ddefddcba6f8 path="/var/lib/kubelet/pods/5666768d-2f08-4c77-9a26-ddefddcba6f8/volumes"
Feb 12 20:31:44.093848 kubelet[1499]: I0212 20:31:44.093805    1499 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:31:44.094148 kubelet[1499]: E0212 20:31:44.093872    1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5666768d-2f08-4c77-9a26-ddefddcba6f8" containerName="mount-cgroup"
Feb 12 20:31:44.094148 kubelet[1499]: E0212 20:31:44.093888    1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5666768d-2f08-4c77-9a26-ddefddcba6f8" containerName="mount-bpf-fs"
Feb 12 20:31:44.094148 kubelet[1499]: E0212 20:31:44.093901    1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5666768d-2f08-4c77-9a26-ddefddcba6f8" containerName="cilium-agent"
Feb 12 20:31:44.094148 kubelet[1499]: E0212 20:31:44.093913    1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5666768d-2f08-4c77-9a26-ddefddcba6f8" containerName="apply-sysctl-overwrites"
Feb 12 20:31:44.094148 kubelet[1499]: E0212 20:31:44.093922    1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5666768d-2f08-4c77-9a26-ddefddcba6f8" containerName="clean-cilium-state"
Feb 12 20:31:44.094148 kubelet[1499]: I0212 20:31:44.093948    1499 memory_manager.go:346] "RemoveStaleState removing state" podUID="5666768d-2f08-4c77-9a26-ddefddcba6f8" containerName="cilium-agent"
Feb 12 20:31:44.101064 systemd[1]: Created slice kubepods-besteffort-poda1c29af8_0c63_43c5_bc36_8a34269ce681.slice.
Feb 12 20:31:44.224876 kubelet[1499]: I0212 20:31:44.224828    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmlx2\" (UniqueName: \"kubernetes.io/projected/a1c29af8-0c63-43c5-bc36-8a34269ce681-kube-api-access-nmlx2\") pod \"cilium-operator-574c4bb98d-n7lnq\" (UID: \"a1c29af8-0c63-43c5-bc36-8a34269ce681\") " pod="kube-system/cilium-operator-574c4bb98d-n7lnq"
Feb 12 20:31:44.225083 kubelet[1499]: I0212 20:31:44.225019    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1c29af8-0c63-43c5-bc36-8a34269ce681-cilium-config-path\") pod \"cilium-operator-574c4bb98d-n7lnq\" (UID: \"a1c29af8-0c63-43c5-bc36-8a34269ce681\") " pod="kube-system/cilium-operator-574c4bb98d-n7lnq"
Feb 12 20:31:44.232204 kubelet[1499]: I0212 20:31:44.232162    1499 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:31:44.239165 systemd[1]: Created slice kubepods-burstable-poda5cf6cf9_100b_4628_871c_bfd07597c459.slice.
Feb 12 20:31:44.261662 kubelet[1499]: W0212 20:31:44.261610    1499 reflector.go:533] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.128.0.56" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.56' and this object
Feb 12 20:31:44.261662 kubelet[1499]: E0212 20:31:44.261666    1499 reflector.go:148] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.128.0.56" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.56' and this object
Feb 12 20:31:44.262017 kubelet[1499]: W0212 20:31:44.261610    1499 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.128.0.56" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.56' and this object
Feb 12 20:31:44.262017 kubelet[1499]: E0212 20:31:44.261698    1499 reflector.go:148] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.128.0.56" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.56' and this object
Feb 12 20:31:44.266176 kubelet[1499]: W0212 20:31:44.266134    1499 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.128.0.56" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.56' and this object
Feb 12 20:31:44.266176 kubelet[1499]: E0212 20:31:44.266183    1499 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.128.0.56" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.56' and this object
Feb 12 20:31:44.405152 env[1148]: time="2024-02-12T20:31:44.404951144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-n7lnq,Uid:a1c29af8-0c63-43c5-bc36-8a34269ce681,Namespace:kube-system,Attempt:0,}"
Feb 12 20:31:44.426757 kubelet[1499]: I0212 20:31:44.426706    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-ipsec-secrets\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.426757 kubelet[1499]: I0212 20:31:44.426769    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-host-proc-sys-net\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427047 kubelet[1499]: I0212 20:31:44.426812    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw4gg\" (UniqueName: \"kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-kube-api-access-nw4gg\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427047 kubelet[1499]: I0212 20:31:44.426847    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-run\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427047 kubelet[1499]: I0212 20:31:44.426878    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5cf6cf9-100b-4628-871c-bfd07597c459-clustermesh-secrets\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427047 kubelet[1499]: I0212 20:31:44.426912    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-xtables-lock\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427047 kubelet[1499]: I0212 20:31:44.426949    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-lib-modules\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427047 kubelet[1499]: I0212 20:31:44.427002    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-cgroup\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427374 kubelet[1499]: I0212 20:31:44.427039    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-etc-cni-netd\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427374 kubelet[1499]: I0212 20:31:44.427073    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-config-path\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427374 kubelet[1499]: I0212 20:31:44.427113    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-bpf-maps\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427374 kubelet[1499]: I0212 20:31:44.427148    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-host-proc-sys-kernel\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427374 kubelet[1499]: I0212 20:31:44.427184    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-hubble-tls\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427374 kubelet[1499]: I0212 20:31:44.427220    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-hostproc\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.427686 kubelet[1499]: I0212 20:31:44.427253    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cni-path\") pod \"cilium-4t7w9\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") " pod="kube-system/cilium-4t7w9"
Feb 12 20:31:44.434670 env[1148]: time="2024-02-12T20:31:44.434534546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:31:44.434902 env[1148]: time="2024-02-12T20:31:44.434620933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:31:44.434902 env[1148]: time="2024-02-12T20:31:44.434662417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:31:44.435306 env[1148]: time="2024-02-12T20:31:44.435172808Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b82398d40f417a57587d7faa2d0277d1d3eef43d5d3ca41957a80ed70b03470 pid=3025 runtime=io.containerd.runc.v2
Feb 12 20:31:44.466716 systemd[1]: run-containerd-runc-k8s.io-4b82398d40f417a57587d7faa2d0277d1d3eef43d5d3ca41957a80ed70b03470-runc.FG1eM4.mount: Deactivated successfully.
Feb 12 20:31:44.473486 systemd[1]: Started cri-containerd-4b82398d40f417a57587d7faa2d0277d1d3eef43d5d3ca41957a80ed70b03470.scope.
Feb 12 20:31:44.483136 kubelet[1499]: E0212 20:31:44.483094    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:44.544217 env[1148]: time="2024-02-12T20:31:44.544169991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-n7lnq,Uid:a1c29af8-0c63-43c5-bc36-8a34269ce681,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b82398d40f417a57587d7faa2d0277d1d3eef43d5d3ca41957a80ed70b03470\""
Feb 12 20:31:44.551555 kubelet[1499]: E0212 20:31:44.551508    1499 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
Feb 12 20:31:44.552107 env[1148]: time="2024-02-12T20:31:44.552058939Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 12 20:31:44.554279 kubelet[1499]: E0212 20:31:44.554242    1499 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:31:44.777002 sshd[3012]: Failed password for root from 36.99.163.171 port 40942 ssh2
Feb 12 20:31:45.031115 sshd[3012]: Received disconnect from 36.99.163.171 port 40942:11: Bye Bye [preauth]
Feb 12 20:31:45.031115 sshd[3012]: Disconnected from authenticating user root 36.99.163.171 port 40942 [preauth]
Feb 12 20:31:45.032880 systemd[1]: sshd@8-10.128.0.56:22-36.99.163.171:40942.service: Deactivated successfully.
Feb 12 20:31:45.427510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648500537.mount: Deactivated successfully.
Feb 12 20:31:45.483289 kubelet[1499]: E0212 20:31:45.483219    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:45.530379 kubelet[1499]: E0212 20:31:45.530321    1499 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb 12 20:31:45.530379 kubelet[1499]: E0212 20:31:45.530362    1499 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-4t7w9: failed to sync secret cache: timed out waiting for the condition
Feb 12 20:31:45.530855 kubelet[1499]: E0212 20:31:45.530796    1499 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-hubble-tls podName:a5cf6cf9-100b-4628-871c-bfd07597c459 nodeName:}" failed. No retries permitted until 2024-02-12 20:31:46.030476178 +0000 UTC m=+67.212532123 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-hubble-tls") pod "cilium-4t7w9" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459") : failed to sync secret cache: timed out waiting for the condition
Feb 12 20:31:45.683582 sshd[2020]: Connection closed by 36.99.163.171 port 34474 [preauth]
Feb 12 20:31:45.685851 systemd[1]: sshd@6-10.128.0.56:22-36.99.163.171:34474.service: Deactivated successfully.
Feb 12 20:31:46.051937 env[1148]: time="2024-02-12T20:31:46.051803392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4t7w9,Uid:a5cf6cf9-100b-4628-871c-bfd07597c459,Namespace:kube-system,Attempt:0,}"
Feb 12 20:31:46.077481 env[1148]: time="2024-02-12T20:31:46.077382485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:31:46.077768 env[1148]: time="2024-02-12T20:31:46.077704170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:31:46.077941 env[1148]: time="2024-02-12T20:31:46.077907291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:31:46.078380 env[1148]: time="2024-02-12T20:31:46.078335191Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7 pid=3074 runtime=io.containerd.runc.v2
Feb 12 20:31:46.106166 systemd[1]: Started cri-containerd-3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7.scope.
Feb 12 20:31:46.152785 env[1148]: time="2024-02-12T20:31:46.152721618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4t7w9,Uid:a5cf6cf9-100b-4628-871c-bfd07597c459,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7\""
Feb 12 20:31:46.157200 env[1148]: time="2024-02-12T20:31:46.157148050Z" level=info msg="CreateContainer within sandbox \"3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:31:46.188066 env[1148]: time="2024-02-12T20:31:46.187958347Z" level=info msg="CreateContainer within sandbox \"3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e\""
Feb 12 20:31:46.189335 env[1148]: time="2024-02-12T20:31:46.189289137Z" level=info msg="StartContainer for \"21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e\""
Feb 12 20:31:46.220426 systemd[1]: Started cri-containerd-21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e.scope.
Feb 12 20:31:46.245325 systemd[1]: cri-containerd-21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e.scope: Deactivated successfully.
Feb 12 20:31:46.406218 env[1148]: time="2024-02-12T20:31:46.406148085Z" level=info msg="shim disconnected" id=21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e
Feb 12 20:31:46.406724 env[1148]: time="2024-02-12T20:31:46.406691543Z" level=warning msg="cleaning up after shim disconnected" id=21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e namespace=k8s.io
Feb 12 20:31:46.406879 env[1148]: time="2024-02-12T20:31:46.406854848Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:46.420155 env[1148]: time="2024-02-12T20:31:46.420094098Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3130 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:31:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 12 20:31:46.420832 env[1148]: time="2024-02-12T20:31:46.420695943Z" level=error msg="copy shim log" error="read /proc/self/fd/93: file already closed"
Feb 12 20:31:46.421233 env[1148]: time="2024-02-12T20:31:46.421175146Z" level=error msg="Failed to pipe stdout of container \"21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e\"" error="reading from a closed fifo"
Feb 12 20:31:46.422098 env[1148]: time="2024-02-12T20:31:46.422039343Z" level=error msg="Failed to pipe stderr of container \"21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e\"" error="reading from a closed fifo"
Feb 12 20:31:46.425051 env[1148]: time="2024-02-12T20:31:46.424929276Z" level=error msg="StartContainer for \"21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 12 20:31:46.425357 kubelet[1499]: E0212 20:31:46.425308    1499 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e"
Feb 12 20:31:46.425488 kubelet[1499]: E0212 20:31:46.425470    1499 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 12 20:31:46.425488 kubelet[1499]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 12 20:31:46.425488 kubelet[1499]: rm /hostbin/cilium-mount
Feb 12 20:31:46.425645 kubelet[1499]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nw4gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-4t7w9_kube-system(a5cf6cf9-100b-4628-871c-bfd07597c459): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 12 20:31:46.425645 kubelet[1499]: E0212 20:31:46.425529    1499 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4t7w9" podUID=a5cf6cf9-100b-4628-871c-bfd07597c459
Feb 12 20:31:46.483639 kubelet[1499]: E0212 20:31:46.483581    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:46.484272 env[1148]: time="2024-02-12T20:31:46.483815119Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:46.487355 env[1148]: time="2024-02-12T20:31:46.487298596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:46.490147 env[1148]: time="2024-02-12T20:31:46.490091158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:31:46.491059 env[1148]: time="2024-02-12T20:31:46.490988238Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 20:31:46.493932 env[1148]: time="2024-02-12T20:31:46.493872715Z" level=info msg="CreateContainer within sandbox \"4b82398d40f417a57587d7faa2d0277d1d3eef43d5d3ca41957a80ed70b03470\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 20:31:46.518807 env[1148]: time="2024-02-12T20:31:46.518727586Z" level=info msg="CreateContainer within sandbox \"4b82398d40f417a57587d7faa2d0277d1d3eef43d5d3ca41957a80ed70b03470\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ae6598801a00d257fd121968b1929342568a71e79c118633fb0b3dbb1d841546\""
Feb 12 20:31:46.519842 env[1148]: time="2024-02-12T20:31:46.519765027Z" level=info msg="StartContainer for \"ae6598801a00d257fd121968b1929342568a71e79c118633fb0b3dbb1d841546\""
Feb 12 20:31:46.553105 systemd[1]: Started cri-containerd-ae6598801a00d257fd121968b1929342568a71e79c118633fb0b3dbb1d841546.scope.
Feb 12 20:31:46.599112 env[1148]: time="2024-02-12T20:31:46.598962209Z" level=info msg="StartContainer for \"ae6598801a00d257fd121968b1929342568a71e79c118633fb0b3dbb1d841546\" returns successfully"
Feb 12 20:31:46.823771 env[1148]: time="2024-02-12T20:31:46.823701922Z" level=info msg="StopPodSandbox for \"3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7\""
Feb 12 20:31:46.824029 env[1148]: time="2024-02-12T20:31:46.823802776Z" level=info msg="Container to stop \"21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:46.832811 systemd[1]: cri-containerd-3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7.scope: Deactivated successfully.
Feb 12 20:31:46.859610 kubelet[1499]: I0212 20:31:46.858938    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-n7lnq" podStartSLOduration=0.913469232 podCreationTimestamp="2024-02-12 20:31:44 +0000 UTC" firstStartedPulling="2024-02-12 20:31:44.546017526 +0000 UTC m=+65.728073447" lastFinishedPulling="2024-02-12 20:31:46.491411275 +0000 UTC m=+67.673467221" observedRunningTime="2024-02-12 20:31:46.836001451 +0000 UTC m=+68.018057395" watchObservedRunningTime="2024-02-12 20:31:46.858863006 +0000 UTC m=+68.040918957"
Feb 12 20:31:46.885170 env[1148]: time="2024-02-12T20:31:46.885094540Z" level=info msg="shim disconnected" id=3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7
Feb 12 20:31:46.885170 env[1148]: time="2024-02-12T20:31:46.885161977Z" level=warning msg="cleaning up after shim disconnected" id=3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7 namespace=k8s.io
Feb 12 20:31:46.885170 env[1148]: time="2024-02-12T20:31:46.885177881Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:46.896369 env[1148]: time="2024-02-12T20:31:46.896299657Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3200 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:46.896811 env[1148]: time="2024-02-12T20:31:46.896758946Z" level=info msg="TearDown network for sandbox \"3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7\" successfully"
Feb 12 20:31:46.896811 env[1148]: time="2024-02-12T20:31:46.896804406Z" level=info msg="StopPodSandbox for \"3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7\" returns successfully"
Feb 12 20:31:46.947438 kubelet[1499]: I0212 20:31:46.947381    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-hostproc\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:46.947880 kubelet[1499]: I0212 20:31:46.947816    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-hostproc" (OuterVolumeSpecName: "hostproc") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.048372 kubelet[1499]: I0212 20:31:47.048302    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-xtables-lock\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048372 kubelet[1499]: I0212 20:31:47.048364    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-cgroup\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048659 kubelet[1499]: I0212 20:31:47.048394    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-etc-cni-netd\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048659 kubelet[1499]: I0212 20:31:47.048432    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-config-path\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048659 kubelet[1499]: I0212 20:31:47.048459    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-bpf-maps\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048659 kubelet[1499]: I0212 20:31:47.048487    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-host-proc-sys-net\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048659 kubelet[1499]: I0212 20:31:47.048518    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw4gg\" (UniqueName: \"kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-kube-api-access-nw4gg\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048659 kubelet[1499]: I0212 20:31:47.048551    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-host-proc-sys-kernel\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048659 kubelet[1499]: I0212 20:31:47.048582    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-hubble-tls\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.048659 kubelet[1499]: I0212 20:31:47.048609    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-run\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.049142 kubelet[1499]: I0212 20:31:47.048667    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-lib-modules\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.049142 kubelet[1499]: I0212 20:31:47.048704    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5cf6cf9-100b-4628-871c-bfd07597c459-clustermesh-secrets\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.049142 kubelet[1499]: I0212 20:31:47.048739    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cni-path\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.049142 kubelet[1499]: I0212 20:31:47.048774    1499 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-ipsec-secrets\") pod \"a5cf6cf9-100b-4628-871c-bfd07597c459\" (UID: \"a5cf6cf9-100b-4628-871c-bfd07597c459\") "
Feb 12 20:31:47.049142 kubelet[1499]: I0212 20:31:47.048820    1499 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-hostproc\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.052395 kubelet[1499]: I0212 20:31:47.052343    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.052858 kubelet[1499]: I0212 20:31:47.052807    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.052996 kubelet[1499]: I0212 20:31:47.052877    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.052996 kubelet[1499]: I0212 20:31:47.052904    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.053183 kubelet[1499]: W0212 20:31:47.053140    1499 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a5cf6cf9-100b-4628-871c-bfd07597c459/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 20:31:47.054249 kubelet[1499]: I0212 20:31:47.054212    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.056161 kubelet[1499]: I0212 20:31:47.056120    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.056283 kubelet[1499]: I0212 20:31:47.056189    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.056642 kubelet[1499]: I0212 20:31:47.056607    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:31:47.056937 kubelet[1499]: I0212 20:31:47.056736    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.057186 kubelet[1499]: I0212 20:31:47.057159    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:31:47.057397 kubelet[1499]: I0212 20:31:47.057372    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-kube-api-access-nw4gg" (OuterVolumeSpecName: "kube-api-access-nw4gg") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "kube-api-access-nw4gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:31:47.057560 kubelet[1499]: I0212 20:31:47.057537    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cni-path" (OuterVolumeSpecName: "cni-path") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:31:47.061565 kubelet[1499]: I0212 20:31:47.061522    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5cf6cf9-100b-4628-871c-bfd07597c459-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:31:47.061701 kubelet[1499]: I0212 20:31:47.061586    1499 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a5cf6cf9-100b-4628-871c-bfd07597c459" (UID: "a5cf6cf9-100b-4628-871c-bfd07597c459"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.148994    1499 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-run\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149049    1499 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-lib-modules\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149067    1499 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-hubble-tls\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149084    1499 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cni-path\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149100    1499 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-ipsec-secrets\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149116    1499 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5cf6cf9-100b-4628-871c-bfd07597c459-clustermesh-secrets\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149183    1499 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-host-proc-sys-net\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149208    1499 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nw4gg\" (UniqueName: \"kubernetes.io/projected/a5cf6cf9-100b-4628-871c-bfd07597c459-kube-api-access-nw4gg\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149227    1499 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-xtables-lock\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149245    1499 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-cgroup\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149261    1499 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-etc-cni-netd\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149278    1499 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5cf6cf9-100b-4628-871c-bfd07597c459-cilium-config-path\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149295    1499 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-bpf-maps\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.149373 kubelet[1499]: I0212 20:31:47.149313    1499 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5cf6cf9-100b-4628-871c-bfd07597c459-host-proc-sys-kernel\") on node \"10.128.0.56\" DevicePath \"\""
Feb 12 20:31:47.342095 systemd[1]: run-containerd-runc-k8s.io-ae6598801a00d257fd121968b1929342568a71e79c118633fb0b3dbb1d841546-runc.cxHlwM.mount: Deactivated successfully.
Feb 12 20:31:47.342253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7-rootfs.mount: Deactivated successfully.
Feb 12 20:31:47.342355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c8abd09d099e72fff61d89cda61f2f2bd5d5f24d738015a768b60194af41ec7-shm.mount: Deactivated successfully.
Feb 12 20:31:47.342453 systemd[1]: var-lib-kubelet-pods-a5cf6cf9\x2d100b\x2d4628\x2d871c\x2dbfd07597c459-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 20:31:47.342552 systemd[1]: var-lib-kubelet-pods-a5cf6cf9\x2d100b\x2d4628\x2d871c\x2dbfd07597c459-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 20:31:47.342665 systemd[1]: var-lib-kubelet-pods-a5cf6cf9\x2d100b\x2d4628\x2d871c\x2dbfd07597c459-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 20:31:47.342769 systemd[1]: var-lib-kubelet-pods-a5cf6cf9\x2d100b\x2d4628\x2d871c\x2dbfd07597c459-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnw4gg.mount: Deactivated successfully.
Feb 12 20:31:47.484384 kubelet[1499]: E0212 20:31:47.484197    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:47.615618 systemd[1]: Removed slice kubepods-burstable-poda5cf6cf9_100b_4628_871c_bfd07597c459.slice.
Feb 12 20:31:47.828579 kubelet[1499]: I0212 20:31:47.828549    1499 scope.go:115] "RemoveContainer" containerID="21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e"
Feb 12 20:31:47.831922 env[1148]: time="2024-02-12T20:31:47.831838071Z" level=info msg="RemoveContainer for \"21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e\""
Feb 12 20:31:47.844521 env[1148]: time="2024-02-12T20:31:47.844458160Z" level=info msg="RemoveContainer for \"21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e\" returns successfully"
Feb 12 20:31:47.871682 kubelet[1499]: I0212 20:31:47.871627    1499 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:31:47.871909 kubelet[1499]: E0212 20:31:47.871752    1499 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5cf6cf9-100b-4628-871c-bfd07597c459" containerName="mount-cgroup"
Feb 12 20:31:47.871909 kubelet[1499]: I0212 20:31:47.871788    1499 memory_manager.go:346] "RemoveStaleState removing state" podUID="a5cf6cf9-100b-4628-871c-bfd07597c459" containerName="mount-cgroup"
Feb 12 20:31:47.878853 systemd[1]: Created slice kubepods-burstable-pod6ed7a40e_7ee9_424b_9b60_89abdb0e0a01.slice.
Feb 12 20:31:47.954367 kubelet[1499]: I0212 20:31:47.954324    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-cilium-cgroup\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.954777 kubelet[1499]: I0212 20:31:47.954750    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-xtables-lock\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955188 kubelet[1499]: I0212 20:31:47.955129    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-cilium-ipsec-secrets\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955589 kubelet[1499]: I0212 20:31:47.955558    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-cilium-run\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955702 kubelet[1499]: I0212 20:31:47.955606    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-bpf-maps\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955702 kubelet[1499]: I0212 20:31:47.955638    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-hostproc\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955702 kubelet[1499]: I0212 20:31:47.955675    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-etc-cni-netd\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955887 kubelet[1499]: I0212 20:31:47.955718    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-host-proc-sys-kernel\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955887 kubelet[1499]: I0212 20:31:47.955762    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-hubble-tls\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955887 kubelet[1499]: I0212 20:31:47.955797    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bdlp\" (UniqueName: \"kubernetes.io/projected/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-kube-api-access-5bdlp\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955887 kubelet[1499]: I0212 20:31:47.955832    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-cilium-config-path\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.955887 kubelet[1499]: I0212 20:31:47.955867    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-lib-modules\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.956492 kubelet[1499]: I0212 20:31:47.955902    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-clustermesh-secrets\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.956492 kubelet[1499]: I0212 20:31:47.955941    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-host-proc-sys-net\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:47.956492 kubelet[1499]: I0212 20:31:47.956008    1499 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ed7a40e-7ee9-424b-9b60-89abdb0e0a01-cni-path\") pod \"cilium-v2jsf\" (UID: \"6ed7a40e-7ee9-424b-9b60-89abdb0e0a01\") " pod="kube-system/cilium-v2jsf"
Feb 12 20:31:48.187911 env[1148]: time="2024-02-12T20:31:48.187727458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2jsf,Uid:6ed7a40e-7ee9-424b-9b60-89abdb0e0a01,Namespace:kube-system,Attempt:0,}"
Feb 12 20:31:48.211905 env[1148]: time="2024-02-12T20:31:48.211797063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:31:48.211905 env[1148]: time="2024-02-12T20:31:48.211848442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:31:48.212226 env[1148]: time="2024-02-12T20:31:48.211867225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:31:48.212357 env[1148]: time="2024-02-12T20:31:48.212283533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980 pid=3231 runtime=io.containerd.runc.v2
Feb 12 20:31:48.231059 systemd[1]: Started cri-containerd-4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980.scope.
Feb 12 20:31:48.268245 env[1148]: time="2024-02-12T20:31:48.267413133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2jsf,Uid:6ed7a40e-7ee9-424b-9b60-89abdb0e0a01,Namespace:kube-system,Attempt:0,} returns sandbox id \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\""
Feb 12 20:31:48.271601 env[1148]: time="2024-02-12T20:31:48.271528348Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:31:48.291836 env[1148]: time="2024-02-12T20:31:48.291756335Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c\""
Feb 12 20:31:48.292747 env[1148]: time="2024-02-12T20:31:48.292649021Z" level=info msg="StartContainer for \"05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c\""
Feb 12 20:31:48.315337 systemd[1]: Started cri-containerd-05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c.scope.
Feb 12 20:31:48.367223 env[1148]: time="2024-02-12T20:31:48.367143002Z" level=info msg="StartContainer for \"05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c\" returns successfully"
Feb 12 20:31:48.377419 systemd[1]: cri-containerd-05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c.scope: Deactivated successfully.
Feb 12 20:31:48.404870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c-rootfs.mount: Deactivated successfully.
Feb 12 20:31:48.415907 env[1148]: time="2024-02-12T20:31:48.415843167Z" level=info msg="shim disconnected" id=05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c
Feb 12 20:31:48.415907 env[1148]: time="2024-02-12T20:31:48.415906468Z" level=warning msg="cleaning up after shim disconnected" id=05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c namespace=k8s.io
Feb 12 20:31:48.416321 env[1148]: time="2024-02-12T20:31:48.415919973Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:48.428083 env[1148]: time="2024-02-12T20:31:48.427964435Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3315 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:48.484906 kubelet[1499]: E0212 20:31:48.484744    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:48.835658 env[1148]: time="2024-02-12T20:31:48.835582407Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:31:48.861497 env[1148]: time="2024-02-12T20:31:48.861425800Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18\""
Feb 12 20:31:48.862424 env[1148]: time="2024-02-12T20:31:48.862356466Z" level=info msg="StartContainer for \"9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18\""
Feb 12 20:31:48.893018 systemd[1]: Started cri-containerd-9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18.scope.
Feb 12 20:31:48.929806 env[1148]: time="2024-02-12T20:31:48.929742279Z" level=info msg="StartContainer for \"9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18\" returns successfully"
Feb 12 20:31:48.939203 systemd[1]: cri-containerd-9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18.scope: Deactivated successfully.
Feb 12 20:31:48.973742 env[1148]: time="2024-02-12T20:31:48.973674135Z" level=info msg="shim disconnected" id=9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18
Feb 12 20:31:48.973742 env[1148]: time="2024-02-12T20:31:48.973745066Z" level=warning msg="cleaning up after shim disconnected" id=9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18 namespace=k8s.io
Feb 12 20:31:48.974135 env[1148]: time="2024-02-12T20:31:48.973761021Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:48.991507 env[1148]: time="2024-02-12T20:31:48.991418782Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3376 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:49.342432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18-rootfs.mount: Deactivated successfully.
Feb 12 20:31:49.485041 kubelet[1499]: E0212 20:31:49.484919    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:49.518139 kubelet[1499]: W0212 20:31:49.518037    1499 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5cf6cf9_100b_4628_871c_bfd07597c459.slice/cri-containerd-21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e.scope WatchSource:0}: container "21c066eb74e91e1a9ccbb6171a9fc031f66bc239f8cc460af1f3c7ae24e8816e" in namespace "k8s.io": not found
Feb 12 20:31:49.556000 kubelet[1499]: E0212 20:31:49.555944    1499 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:31:49.612022 kubelet[1499]: I0212 20:31:49.611878    1499 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a5cf6cf9-100b-4628-871c-bfd07597c459 path="/var/lib/kubelet/pods/a5cf6cf9-100b-4628-871c-bfd07597c459/volumes"
Feb 12 20:31:49.840407 env[1148]: time="2024-02-12T20:31:49.840210358Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:31:49.863713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384485998.mount: Deactivated successfully.
Feb 12 20:31:49.870452 env[1148]: time="2024-02-12T20:31:49.870384423Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442\""
Feb 12 20:31:49.871098 env[1148]: time="2024-02-12T20:31:49.871019150Z" level=info msg="StartContainer for \"b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442\""
Feb 12 20:31:49.909131 systemd[1]: Started cri-containerd-b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442.scope.
Feb 12 20:31:49.953480 env[1148]: time="2024-02-12T20:31:49.953412812Z" level=info msg="StartContainer for \"b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442\" returns successfully"
Feb 12 20:31:49.957113 systemd[1]: cri-containerd-b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442.scope: Deactivated successfully.
Feb 12 20:31:49.990116 env[1148]: time="2024-02-12T20:31:49.990052763Z" level=info msg="shim disconnected" id=b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442
Feb 12 20:31:49.990438 env[1148]: time="2024-02-12T20:31:49.990139292Z" level=warning msg="cleaning up after shim disconnected" id=b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442 namespace=k8s.io
Feb 12 20:31:49.990438 env[1148]: time="2024-02-12T20:31:49.990155385Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:50.001508 env[1148]: time="2024-02-12T20:31:50.001460745Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3433 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:50.342493 systemd[1]: run-containerd-runc-k8s.io-b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442-runc.WcOv3j.mount: Deactivated successfully.
Feb 12 20:31:50.342652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442-rootfs.mount: Deactivated successfully.
Feb 12 20:31:50.485325 kubelet[1499]: E0212 20:31:50.485210    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:50.846253 env[1148]: time="2024-02-12T20:31:50.846196035Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:31:50.866281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657871617.mount: Deactivated successfully.
Feb 12 20:31:50.878113 env[1148]: time="2024-02-12T20:31:50.878038957Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321\""
Feb 12 20:31:50.879186 env[1148]: time="2024-02-12T20:31:50.879130735Z" level=info msg="StartContainer for \"eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321\""
Feb 12 20:31:50.903947 systemd[1]: Started cri-containerd-eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321.scope.
Feb 12 20:31:50.943669 systemd[1]: cri-containerd-eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321.scope: Deactivated successfully.
Feb 12 20:31:50.946683 env[1148]: time="2024-02-12T20:31:50.945817754Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed7a40e_7ee9_424b_9b60_89abdb0e0a01.slice/cri-containerd-eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321.scope/memory.events\": no such file or directory"
Feb 12 20:31:50.949481 env[1148]: time="2024-02-12T20:31:50.949403051Z" level=info msg="StartContainer for \"eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321\" returns successfully"
Feb 12 20:31:50.980447 env[1148]: time="2024-02-12T20:31:50.980381752Z" level=info msg="shim disconnected" id=eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321
Feb 12 20:31:50.980753 env[1148]: time="2024-02-12T20:31:50.980460965Z" level=warning msg="cleaning up after shim disconnected" id=eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321 namespace=k8s.io
Feb 12 20:31:50.980753 env[1148]: time="2024-02-12T20:31:50.980477739Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:50.992363 env[1148]: time="2024-02-12T20:31:50.992301118Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3490 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:51.342845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321-rootfs.mount: Deactivated successfully.
Feb 12 20:31:51.486205 kubelet[1499]: E0212 20:31:51.486140    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:51.859248 env[1148]: time="2024-02-12T20:31:51.859175640Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:31:51.886417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215152599.mount: Deactivated successfully.
Feb 12 20:31:51.894395 env[1148]: time="2024-02-12T20:31:51.894315544Z" level=info msg="CreateContainer within sandbox \"4920239c77771b1dd328cc3ffdda4fe4a3c25f71cc55295baafd9c283e097980\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ce678a56846f196500949df62ce0e2b0ab062736c2dd510f83f2488d4edb633\""
Feb 12 20:31:51.895419 env[1148]: time="2024-02-12T20:31:51.895381206Z" level=info msg="StartContainer for \"0ce678a56846f196500949df62ce0e2b0ab062736c2dd510f83f2488d4edb633\""
Feb 12 20:31:51.920777 systemd[1]: Started cri-containerd-0ce678a56846f196500949df62ce0e2b0ab062736c2dd510f83f2488d4edb633.scope.
Feb 12 20:31:51.968009 env[1148]: time="2024-02-12T20:31:51.967237929Z" level=info msg="StartContainer for \"0ce678a56846f196500949df62ce0e2b0ab062736c2dd510f83f2488d4edb633\" returns successfully"
Feb 12 20:31:52.398025 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 20:31:52.486858 kubelet[1499]: E0212 20:31:52.486765    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:52.634599 kubelet[1499]: W0212 20:31:52.634535    1499 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed7a40e_7ee9_424b_9b60_89abdb0e0a01.slice/cri-containerd-05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c.scope WatchSource:0}: task 05b0ba51d139ad6c8a2705206cce87f7cc46c26078e1b3a4be79360f34d1d92c not found: not found
Feb 12 20:31:52.877848 kubelet[1499]: I0212 20:31:52.877435    1499 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-v2jsf" podStartSLOduration=5.877388614 podCreationTimestamp="2024-02-12 20:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:31:52.877065436 +0000 UTC m=+74.059121382" watchObservedRunningTime="2024-02-12 20:31:52.877388614 +0000 UTC m=+74.059444561"
Feb 12 20:31:52.932116 kubelet[1499]: I0212 20:31:52.932083    1499 setters.go:548] "Node became not ready" node="10.128.0.56" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:31:52.932017925 +0000 UTC m=+74.114073924 LastTransitionTime:2024-02-12 20:31:52.932017925 +0000 UTC m=+74.114073924 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 20:31:53.487290 kubelet[1499]: E0212 20:31:53.487223    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:54.488236 kubelet[1499]: E0212 20:31:54.488184    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:55.329452 systemd-networkd[1025]: lxc_health: Link UP
Feb 12 20:31:55.341160 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:31:55.346182 systemd-networkd[1025]: lxc_health: Gained carrier
Feb 12 20:31:55.488558 kubelet[1499]: E0212 20:31:55.488458    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:55.743598 kubelet[1499]: W0212 20:31:55.743439    1499 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed7a40e_7ee9_424b_9b60_89abdb0e0a01.slice/cri-containerd-9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18.scope WatchSource:0}: task 9b0fa94cf22500f8475ebc485312bd7fba4dcecce734cc74decf0fface74fb18 not found: not found
Feb 12 20:31:55.882353 systemd[1]: run-containerd-runc-k8s.io-0ce678a56846f196500949df62ce0e2b0ab062736c2dd510f83f2488d4edb633-runc.6hcZwf.mount: Deactivated successfully.
Feb 12 20:31:56.489138 kubelet[1499]: E0212 20:31:56.489080    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:57.136764 systemd-networkd[1025]: lxc_health: Gained IPv6LL
Feb 12 20:31:57.490396 kubelet[1499]: E0212 20:31:57.490232    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:58.152097 systemd[1]: run-containerd-runc-k8s.io-0ce678a56846f196500949df62ce0e2b0ab062736c2dd510f83f2488d4edb633-runc.ikbhKq.mount: Deactivated successfully.
Feb 12 20:31:58.491052 kubelet[1499]: E0212 20:31:58.490832    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:58.858436 kubelet[1499]: W0212 20:31:58.858372    1499 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed7a40e_7ee9_424b_9b60_89abdb0e0a01.slice/cri-containerd-b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442.scope WatchSource:0}: task b5db2d1966b189b2fcb267dde538e1ec8ec831823e2f6ba0161a3bf2f58b1442 not found: not found
Feb 12 20:31:59.434091 kubelet[1499]: E0212 20:31:59.434037    1499 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:31:59.491591 kubelet[1499]: E0212 20:31:59.491531    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:32:00.472034 systemd[1]: run-containerd-runc-k8s.io-0ce678a56846f196500949df62ce0e2b0ab062736c2dd510f83f2488d4edb633-runc.90QohV.mount: Deactivated successfully.
Feb 12 20:32:00.493513 kubelet[1499]: E0212 20:32:00.493412    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:32:01.494115 kubelet[1499]: E0212 20:32:01.494054    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:32:01.975789 kubelet[1499]: W0212 20:32:01.975732    1499 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ed7a40e_7ee9_424b_9b60_89abdb0e0a01.slice/cri-containerd-eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321.scope WatchSource:0}: task eb82d13a3fb89a84f052f61f662c9b193228e7d5d634d9203f8505c72c20d321 not found: not found
Feb 12 20:32:02.494604 kubelet[1499]: E0212 20:32:02.494514    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:32:03.495224 kubelet[1499]: E0212 20:32:03.495164    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:32:04.495750 kubelet[1499]: E0212 20:32:04.495669    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:32:05.496295 kubelet[1499]: E0212 20:32:05.496219    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:32:06.496814 kubelet[1499]: E0212 20:32:06.496753    1499 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"